00:00:00.001 Started by upstream project "autotest-per-patch" build number 132745
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.109 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.109 The recommended git tool is: git
00:00:00.110 using credential 00000000-0000-0000-0000-000000000002
00:00:00.111 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.183 Fetching changes from the remote Git repository
00:00:00.187 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.259 Using shallow fetch with depth 1
00:00:00.259 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.259 > git --version # timeout=10
00:00:00.315 > git --version # 'git version 2.39.2'
00:00:00.315 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.359 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.359 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.351 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.363 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.376 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.376 > git config core.sparsecheckout # timeout=10
00:00:05.392 > git read-tree -mu HEAD # timeout=10
00:00:05.410 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.436 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.436 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.528 [Pipeline] Start of Pipeline
00:00:05.542 [Pipeline] library
00:00:05.544 Loading library shm_lib@master
00:00:05.544 Library shm_lib@master is cached. Copying from home.
00:00:05.562 [Pipeline] node
00:00:05.571 Running on CYP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.572 [Pipeline] {
00:00:05.582 [Pipeline] catchError
00:00:05.584 [Pipeline] {
00:00:05.597 [Pipeline] wrap
00:00:05.606 [Pipeline] {
00:00:05.615 [Pipeline] stage
00:00:05.617 [Pipeline] { (Prologue)
00:00:05.987 [Pipeline] sh
00:00:06.271 + logger -p user.info -t JENKINS-CI
00:00:06.286 [Pipeline] echo
00:00:06.288 Node: CYP11
00:00:06.294 [Pipeline] sh
00:00:06.587 [Pipeline] setCustomBuildProperty
00:00:06.600 [Pipeline] echo
00:00:06.602 Cleanup processes
00:00:06.607 [Pipeline] sh
00:00:06.910 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.910 2684534 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.925 [Pipeline] sh
00:00:07.204 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.204 ++ grep -v 'sudo pgrep'
00:00:07.204 ++ awk '{print $1}'
00:00:07.204 + sudo kill -9
00:00:07.204 + true
00:00:07.215 [Pipeline] cleanWs
00:00:07.223 [WS-CLEANUP] Deleting project workspace...
00:00:07.223 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.227 [WS-CLEANUP] done
00:00:07.230 [Pipeline] setCustomBuildProperty
00:00:07.242 [Pipeline] sh
00:00:07.518 + sudo git config --global --replace-all safe.directory '*'
00:00:07.624 [Pipeline] httpRequest
00:00:08.554 [Pipeline] echo
00:00:08.556 Sorcerer 10.211.164.101 is alive
00:00:08.566 [Pipeline] retry
00:00:08.569 [Pipeline] {
00:00:08.587 [Pipeline] httpRequest
00:00:08.592 HttpMethod: GET
00:00:08.593 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.593 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.621 Response Code: HTTP/1.1 200 OK
00:00:08.621 Success: Status code 200 is in the accepted range: 200,404
00:00:08.622 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:24.526 [Pipeline] }
00:00:24.544 [Pipeline] // retry
00:00:24.552 [Pipeline] sh
00:00:24.838 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:24.855 [Pipeline] httpRequest
00:00:25.265 [Pipeline] echo
00:00:25.266 Sorcerer 10.211.164.101 is alive
00:00:25.277 [Pipeline] retry
00:00:25.279 [Pipeline] {
00:00:25.294 [Pipeline] httpRequest
00:00:25.299 HttpMethod: GET
00:00:25.300 URL: http://10.211.164.101/packages/spdk_88dfb58dcfd4a72fa7a637504eb1ec14831245e8.tar.gz
00:00:25.300 Sending request to url: http://10.211.164.101/packages/spdk_88dfb58dcfd4a72fa7a637504eb1ec14831245e8.tar.gz
00:00:25.306 Response Code: HTTP/1.1 200 OK
00:00:25.306 Success: Status code 200 is in the accepted range: 200,404
00:00:25.306 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_88dfb58dcfd4a72fa7a637504eb1ec14831245e8.tar.gz
00:02:09.241 [Pipeline] }
00:02:09.259 [Pipeline] // retry
00:02:09.267 [Pipeline] sh
00:02:09.553 + tar --no-same-owner -xf spdk_88dfb58dcfd4a72fa7a637504eb1ec14831245e8.tar.gz
00:02:12.105 [Pipeline] sh
00:02:12.390 + git -C spdk log --oneline -n5
00:02:12.390 88dfb58dc nvme/rdma: Don't limit max_sge if UMR is used
00:02:12.390 8c97b8e7c nvme/rdma: Register UMR per IO request
00:02:12.390 52436cfa9 accel/mlx5: Support mkey registration
00:02:12.390 55a400896 accel/mlx5: Create pool of UMRs
00:02:12.390 562857cff lib/mlx5: API to configure UMR
00:02:12.401 [Pipeline] }
00:02:12.415 [Pipeline] // stage
00:02:12.423 [Pipeline] stage
00:02:12.425 [Pipeline] { (Prepare)
00:02:12.441 [Pipeline] writeFile
00:02:12.459 [Pipeline] sh
00:02:12.743 + logger -p user.info -t JENKINS-CI
00:02:12.756 [Pipeline] sh
00:02:13.059 + logger -p user.info -t JENKINS-CI
00:02:13.070 [Pipeline] sh
00:02:13.353 + cat autorun-spdk.conf
00:02:13.353 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:13.353 SPDK_TEST_NVMF=1
00:02:13.353 SPDK_TEST_NVME_CLI=1
00:02:13.353 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:13.353 SPDK_TEST_NVMF_NICS=e810
00:02:13.353 SPDK_TEST_VFIOUSER=1
00:02:13.353 SPDK_RUN_UBSAN=1
00:02:13.353 NET_TYPE=phy
00:02:13.360 RUN_NIGHTLY=0
00:02:13.364 [Pipeline] readFile
00:02:13.383 [Pipeline] withEnv
00:02:13.385 [Pipeline] {
00:02:13.394 [Pipeline] sh
00:02:13.677 + set -ex
00:02:13.678 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:13.678 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:13.678 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:13.678 ++ SPDK_TEST_NVMF=1
00:02:13.678 ++ SPDK_TEST_NVME_CLI=1
00:02:13.678 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:13.678 ++ SPDK_TEST_NVMF_NICS=e810
00:02:13.678 ++ SPDK_TEST_VFIOUSER=1
00:02:13.678 ++ SPDK_RUN_UBSAN=1
00:02:13.678 ++ NET_TYPE=phy
00:02:13.678 ++ RUN_NIGHTLY=0
00:02:13.678 + case $SPDK_TEST_NVMF_NICS in
00:02:13.678 + DRIVERS=ice
00:02:13.678 + [[ tcp == \r\d\m\a ]]
00:02:13.678 + [[ -n ice ]]
00:02:13.678 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:13.678 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:13.678 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:13.678 rmmod: ERROR: Module irdma is not currently loaded
00:02:13.678 rmmod: ERROR: Module i40iw is not currently loaded
00:02:13.678 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:13.678 + true
00:02:13.678 + for D in $DRIVERS
00:02:13.678 + sudo modprobe ice
00:02:13.678 + exit 0
00:02:13.687 [Pipeline] }
00:02:13.700 [Pipeline] // withEnv
00:02:13.706 [Pipeline] }
00:02:13.717 [Pipeline] // stage
00:02:13.725 [Pipeline] catchError
00:02:13.727 [Pipeline] {
00:02:13.737 [Pipeline] timeout
00:02:13.737 Timeout set to expire in 1 hr 0 min
00:02:13.738 [Pipeline] {
00:02:13.750 [Pipeline] stage
00:02:13.752 [Pipeline] { (Tests)
00:02:13.765 [Pipeline] sh
00:02:14.056 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:14.056 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:14.056 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:14.056 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:14.056 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:14.056 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:14.056 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:14.056 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:14.056 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:14.056 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:14.056 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:14.056 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:14.056 + source /etc/os-release
00:02:14.056 ++ NAME='Fedora Linux'
00:02:14.056 ++ VERSION='39 (Cloud Edition)'
00:02:14.056 ++ ID=fedora
00:02:14.056 ++ VERSION_ID=39
00:02:14.056 ++ VERSION_CODENAME=
00:02:14.056 ++ PLATFORM_ID=platform:f39
00:02:14.056 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:14.056 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:14.056 ++ LOGO=fedora-logo-icon
00:02:14.056 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:14.056 ++ HOME_URL=https://fedoraproject.org/
00:02:14.056 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:14.057 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:14.057 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:14.057 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:14.057 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:14.057 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:14.057 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:14.057 ++ SUPPORT_END=2024-11-12
00:02:14.057 ++ VARIANT='Cloud Edition'
00:02:14.057 ++ VARIANT_ID=cloud
00:02:14.057 + uname -a
00:02:14.057 Linux spdk-cyp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:14.057 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:16.658 Hugepages
00:02:16.658 node hugesize free / total
00:02:16.658 node0 1048576kB 0 / 0
00:02:16.658 node0 2048kB 0 / 0
00:02:16.658 node1 1048576kB 0 / 0
00:02:16.658 node1 2048kB 0 / 0
00:02:16.658
00:02:16.658 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:16.658 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:02:16.658 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:02:16.658 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:02:16.658 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:02:16.658 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:02:16.658 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:02:16.658 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:02:16.658 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:02:16.658 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:02:16.658 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:02:16.658 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:02:16.658 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:02:16.658 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:02:16.658 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:02:16.658 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:02:16.658 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:02:16.658 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:02:16.658 + rm -f /tmp/spdk-ld-path
00:02:16.658 + source autorun-spdk.conf
00:02:16.658 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:16.658 ++ SPDK_TEST_NVMF=1
00:02:16.658 ++ SPDK_TEST_NVME_CLI=1
00:02:16.658 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:16.658 ++ SPDK_TEST_NVMF_NICS=e810
00:02:16.658 ++ SPDK_TEST_VFIOUSER=1
00:02:16.658 ++ SPDK_RUN_UBSAN=1
00:02:16.658 ++ NET_TYPE=phy
00:02:16.658 ++ RUN_NIGHTLY=0
00:02:16.658 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:16.658 + [[ -n '' ]]
00:02:16.658 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:16.658 + for M in /var/spdk/build-*-manifest.txt
00:02:16.658 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:16.658 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:16.658 + for M in /var/spdk/build-*-manifest.txt
00:02:16.658 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:16.658 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:16.658 + for M in /var/spdk/build-*-manifest.txt
00:02:16.658 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:16.658 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:16.658 ++ uname
00:02:16.658 + [[ Linux == \L\i\n\u\x ]]
00:02:16.658 + sudo dmesg -T
00:02:16.658 + sudo dmesg --clear
00:02:16.658 + dmesg_pid=2685762
00:02:16.658 + [[ Fedora Linux == FreeBSD ]]
00:02:16.658 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:16.658 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:16.658 + sudo dmesg -Tw
00:02:16.658 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:16.658 + [[ -x /usr/src/fio-static/fio ]]
00:02:16.658 + export FIO_BIN=/usr/src/fio-static/fio
00:02:16.658 + FIO_BIN=/usr/src/fio-static/fio
00:02:16.658 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:16.658 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:16.658 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:16.658 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:16.658 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:16.659 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:16.659 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:16.659 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:16.659 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
17:39:04 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
17:39:04 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
17:39:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
17:39:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
17:39:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
17:39:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
17:39:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
17:39:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
17:39:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
17:39:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
17:39:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
17:39:04 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
17:39:04 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
17:39:04 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
17:39:04 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
17:39:04 -- scripts/common.sh@15 -- $ shopt -s extglob
17:39:04 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
17:39:04 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
17:39:04 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
17:39:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:39:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:39:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:39:04 -- paths/export.sh@5 -- $ export PATH
17:39:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:39:04 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
17:39:04 -- common/autobuild_common.sh@493 -- $ date +%s
17:39:04 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733503144.XXXXXX
17:39:04 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733503144.yR2RA4
17:39:04 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
17:39:04 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
17:39:04 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
17:39:04 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
17:39:04 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
17:39:04 -- common/autobuild_common.sh@509 -- $ get_config_params
17:39:04 -- common/autotest_common.sh@409 -- $ xtrace_disable
17:39:04 -- common/autotest_common.sh@10 -- $ set +x
17:39:04 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
17:39:04 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
17:39:04 -- pm/common@17 -- $ local monitor
17:39:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:39:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:39:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:39:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:39:04 -- pm/common@25 -- $ sleep 1
17:39:04 -- pm/common@21 -- $ date +%s
17:39:04 -- pm/common@21 -- $ date +%s
17:39:04 -- pm/common@21 -- $ date +%s
17:39:04 -- pm/common@21 -- $ date +%s
17:39:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733503144
17:39:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733503144
17:39:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733503144
17:39:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733503144
00:02:16.659 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733503144_collect-cpu-load.pm.log
00:02:16.659 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733503144_collect-vmstat.pm.log
00:02:16.659 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733503144_collect-cpu-temp.pm.log
00:02:16.659 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733503144_collect-bmc-pm.bmc.pm.log
17:39:05 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
17:39:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
17:39:05 -- spdk/autobuild.sh@12 -- $ umask 022
17:39:05 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
17:39:05 -- spdk/autobuild.sh@16 -- $ date -u
00:02:17.600 Fri Dec 6 04:39:05 PM UTC 2024
17:39:05 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:17.600 v25.01-pre-308-g88dfb58dc
17:39:05 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
17:39:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
17:39:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
17:39:05 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
17:39:05 -- common/autotest_common.sh@1111 -- $ xtrace_disable
17:39:05 -- common/autotest_common.sh@10 -- $ set +x
00:02:17.600 ************************************
00:02:17.600 START TEST ubsan
00:02:17.600 ************************************
00:02:17.600 17:39:05 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:17.600 using ubsan
00:02:17.600
00:02:17.600 real 0m0.000s
00:02:17.600 user 0m0.000s
00:02:17.600 sys 0m0.000s
00:02:17.600 17:39:05 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
17:39:05 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:17.600 ************************************
00:02:17.600 END TEST ubsan
00:02:17.600 ************************************
17:39:05 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
17:39:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
17:39:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
17:39:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
17:39:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
17:39:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
17:39:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
17:39:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
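The config_params string captured above is the flag set that autobuild.sh hands to SPDK's configure in the next entry (with --with-shared appended there). As a rough sketch only, the same configure/build step could be reproduced by hand; the checkout location and job count below are assumptions, and --with-fio expects a prebuilt fio source tree at /usr/src/fio:

    # Sketch: redo this CI configure/build step locally (bash; paths are assumptions).
    cd /path/to/spdk                      # hypothetical SPDK checkout, submodules initialized
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j"$(nproc)"                     # this CI run uses make -j144 below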
17:39:05 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:17.601 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:17.601 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:17.860 Using 'verbs' RDMA provider
00:02:28.417 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:38.461 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:38.461 Creating mk/config.mk...done.
00:02:38.461 Creating mk/cc.flags.mk...done.
00:02:38.461 Type 'make' to build.
17:39:25 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
17:39:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
17:39:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable
17:39:25 -- common/autotest_common.sh@10 -- $ set +x
00:02:38.461 ************************************
00:02:38.461 START TEST make
00:02:38.461 ************************************
00:02:38.461 17:39:25 make -- common/autotest_common.sh@1129 -- $ make -j144
00:02:38.462 make[1]: Nothing to be done for 'all'.
00:02:39.402 The Meson build system
00:02:39.402 Version: 1.5.0
00:02:39.402 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:39.402 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:39.402 Build type: native build
00:02:39.402 Project name: libvfio-user
00:02:39.402 Project version: 0.0.1
00:02:39.402 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:39.402 C linker for the host machine: cc ld.bfd 2.40-14
00:02:39.402 Host machine cpu family: x86_64
00:02:39.402 Host machine cpu: x86_64
00:02:39.402 Run-time dependency threads found: YES
00:02:39.402 Library dl found: YES
00:02:39.402 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:39.402 Run-time dependency json-c found: YES 0.17
00:02:39.402 Run-time dependency cmocka found: YES 1.1.7
00:02:39.402 Program pytest-3 found: NO
00:02:39.402 Program flake8 found: NO
00:02:39.402 Program misspell-fixer found: NO
00:02:39.402 Program restructuredtext-lint found: NO
00:02:39.402 Program valgrind found: YES (/usr/bin/valgrind)
00:02:39.402 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:39.402 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:39.402 Compiler for C supports arguments -Wwrite-strings: YES
00:02:39.402 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:39.402 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:39.402 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:39.402 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:39.402 Build targets in project: 8
00:02:39.402 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:39.402 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:39.402
00:02:39.402 libvfio-user 0.0.1
00:02:39.402
00:02:39.402 User defined options
00:02:39.402 buildtype : debug
00:02:39.402 default_library: shared
00:02:39.402 libdir : /usr/local/lib
00:02:39.402
00:02:39.402 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:39.973 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:39.973 [1/37] Compiling C object samples/lspci.p/lspci.c.o
[2/37] Compiling C object samples/null.p/null.c.o
[3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
[4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
[5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
[6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
[7/37] Compiling C object samples/client.p/.._lib_migration.c.o
[8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
[9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
[10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
[11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
[12/37] Compiling C object samples/client.p/.._lib_tran.c.o
[13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
[14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
[15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
[16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
[17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
[18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
[19/37] Compiling C object samples/server.p/server.c.o
[20/37] Compiling C object test/unit_tests.p/mocks.c.o
[21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
[22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
[23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
[24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
[25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
[26/37] Compiling C object samples/client.p/client.c.o
[27/37] Linking target samples/client
[28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
[29/37] Linking target test/unit_tests
[30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
[31/37] Linking target lib/libvfio-user.so.0.0.1
[32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
[33/37] Linking target samples/server
[34/37] Linking target samples/gpio-pci-idio-16
[35/37] Linking target samples/lspci
[36/37] Linking target samples/shadow_ioeventfd_server
[37/37] Linking target samples/null
00:02:40.233 INFO: autodetecting backend as ninja
00:02:40.233 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
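At this point meson has configured libvfio-user out of tree (buildtype debug, default_library shared, libdir /usr/local/lib per the summary above) and ninja has built all 37 targets; the entry that follows stages the result into the SPDK build tree via a DESTDIR install. A minimal sketch of that same out-of-tree meson/ninja flow, with the directory names taken from this log and everything else an assumption:

    # Sketch: the meson configure/build/install pattern used for libvfio-user here (bash).
    meson setup spdk/build/libvfio-user/build-debug spdk/libvfio-user \
        --buildtype debug --default-library shared --libdir /usr/local/lib
    ninja -C spdk/build/libvfio-user/build-debug          # the [N/37] steps above
    DESTDIR="$PWD/spdk/build/libvfio-user" \
        meson install --quiet -C spdk/build/libvfio-user/build-debug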
00:02:40.233 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:40.492 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:40.492 ninja: no work to do.
00:02:43.776 The Meson build system
00:02:43.777 Version: 1.5.0
00:02:43.777 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:43.777 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:43.777 Build type: native build
00:02:43.777 Program cat found: YES (/usr/bin/cat)
00:02:43.777 Project name: DPDK
00:02:43.777 Project version: 24.03.0
00:02:43.777 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:43.777 C linker for the host machine: cc ld.bfd 2.40-14
00:02:43.777 Host machine cpu family: x86_64
00:02:43.777 Host machine cpu: x86_64
00:02:43.777 Message: ## Building in Developer Mode ##
00:02:43.777 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:43.777 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:43.777 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:43.777 Program python3 found: YES (/usr/bin/python3)
00:02:43.777 Program cat found: YES (/usr/bin/cat)
00:02:43.777 Compiler for C supports arguments -march=native: YES
00:02:43.777 Checking for size of "void *" : 8
00:02:43.777 Checking for size of "void *" : 8 (cached)
00:02:43.777 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:43.777 Library m found: YES
00:02:43.777 Library numa found: YES
00:02:43.777 Has header "numaif.h" : YES
00:02:43.777 Library fdt found: NO
00:02:43.777 Library execinfo found: NO
00:02:43.777 Has header "execinfo.h" : YES
00:02:43.777 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:43.777 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:43.777 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:43.777 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:43.777 Run-time dependency openssl found: YES 3.1.1
00:02:43.777 Run-time dependency libpcap found: YES 1.10.4
00:02:43.777 Has header "pcap.h" with dependency libpcap: YES
00:02:43.777 Compiler for C supports arguments -Wcast-qual: YES
00:02:43.777 Compiler for C supports arguments -Wdeprecated: YES
00:02:43.777 Compiler for C supports arguments -Wformat: YES
00:02:43.777 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:43.777 Compiler for C supports arguments -Wformat-security: NO
00:02:43.777 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:43.777 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:43.777 Compiler for C supports arguments -Wnested-externs: YES
00:02:43.777 Compiler for C supports arguments -Wold-style-definition: YES
00:02:43.777 Compiler for C supports arguments -Wpointer-arith: YES
00:02:43.777 Compiler for C supports arguments -Wsign-compare: YES
00:02:43.777 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:43.777 Compiler for C supports arguments -Wundef: YES
00:02:43.777 Compiler for C supports arguments -Wwrite-strings: YES
00:02:43.777 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:43.777 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:43.777 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:43.777 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:43.777 Program objdump found: YES (/usr/bin/objdump)
00:02:43.777 Compiler for C supports arguments -mavx512f: YES
00:02:43.777 Checking if "AVX512 checking" compiles: YES
00:02:43.777 Fetching value of define "__SSE4_2__" : 1
00:02:43.777 Fetching value of define "__AES__" : 1
00:02:43.777 Fetching value of define "__AVX__" : 1
00:02:43.777 Fetching value of define "__AVX2__" : 1
00:02:43.777 Fetching value of define "__AVX512BW__" : 1
00:02:43.777 Fetching value of define "__AVX512CD__" : 1
00:02:43.777 Fetching value of define "__AVX512DQ__" : 1
00:02:43.777 Fetching value of define "__AVX512F__" : 1
00:02:43.777 Fetching value of define "__AVX512VL__" : 1
00:02:43.777 Fetching value of define "__PCLMUL__" : 1
00:02:43.777 Fetching value of define "__RDRND__" : 1
00:02:43.777 Fetching value of define "__RDSEED__" : 1
00:02:43.777 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:43.777 Fetching value of define "__znver1__" : (undefined)
00:02:43.777 Fetching value of define "__znver2__" : (undefined)
00:02:43.777 Fetching value of define "__znver3__" : (undefined)
00:02:43.777 Fetching value of define "__znver4__" : (undefined)
00:02:43.777 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:43.777 Message: lib/log: Defining dependency "log"
00:02:43.777 Message: lib/kvargs: Defining dependency "kvargs"
00:02:43.777 Message: lib/telemetry: Defining dependency "telemetry"
00:02:43.777 Checking for function "getentropy" : NO
00:02:43.777 Message: lib/eal: Defining dependency "eal"
00:02:43.777 Message: lib/ring: Defining dependency "ring"
00:02:43.777 Message: lib/rcu: Defining dependency "rcu"
00:02:43.777 Message: lib/mempool: Defining dependency "mempool"
00:02:43.777 Message: lib/mbuf: Defining dependency "mbuf"
00:02:43.777 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:43.777 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:43.777 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:43.777 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:43.777 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:43.777 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:43.777 Compiler for C supports arguments -mpclmul: YES
00:02:43.777 Compiler for C supports arguments -maes: YES
00:02:43.777 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:43.777 Compiler for C supports arguments -mavx512bw: YES
00:02:43.777 Compiler for C supports arguments -mavx512dq: YES
00:02:43.777 Compiler for C supports arguments -mavx512vl: YES
00:02:43.777 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:43.777 Compiler for C supports arguments -mavx2: YES
00:02:43.777 Compiler for C supports arguments -mavx: YES
00:02:43.777 Message: lib/net: Defining dependency "net"
00:02:43.777 Message: lib/meter: Defining dependency "meter"
00:02:43.777 Message: lib/ethdev: Defining dependency "ethdev"
00:02:43.777 Message: lib/pci: Defining dependency "pci"
00:02:43.777 Message: lib/cmdline: Defining dependency "cmdline"
00:02:43.777 Message: lib/hash: Defining dependency "hash"
00:02:43.777 Message: lib/timer: Defining dependency "timer"
00:02:43.777 Message: lib/compressdev: Defining dependency "compressdev"
00:02:43.777 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:43.777 Message: lib/dmadev: Defining dependency "dmadev"
00:02:43.777 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:43.777 Message: lib/power: Defining dependency "power"
00:02:43.777 Message: lib/reorder: Defining dependency "reorder"
00:02:43.777 Message: lib/security: Defining dependency "security"
00:02:43.777 Has header "linux/userfaultfd.h" : YES
00:02:43.777 Has header "linux/vduse.h" : YES
00:02:43.777 Message: lib/vhost: Defining dependency "vhost"
00:02:43.777 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:43.777 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:43.777 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:43.777 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:43.777 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:43.777 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:43.777 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:43.777 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:43.777 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:43.777 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:43.777 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:43.777 Configuring doxy-api-html.conf using configuration
00:02:43.777 Configuring doxy-api-man.conf using configuration
00:02:43.777 Program mandb found: YES (/usr/bin/mandb)
00:02:43.777 Program sphinx-build found: NO
00:02:43.777 Configuring rte_build_config.h using configuration
00:02:43.777 Message:
00:02:43.777 =================
00:02:43.777 Applications Enabled
00:02:43.777 =================
00:02:43.777
00:02:43.777 apps:
00:02:43.777
00:02:43.777
00:02:43.777 Message:
00:02:43.777 =================
00:02:43.777 Libraries Enabled
00:02:43.777 =================
00:02:43.777
00:02:43.777 libs:
00:02:43.777 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:43.777 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:43.777 cryptodev, dmadev, power, reorder, security, vhost,
00:02:43.777
00:02:43.777 Message:
00:02:43.777 ===============
00:02:43.777 Drivers Enabled
00:02:43.777 ===============
00:02:43.777
00:02:43.777 common:
00:02:43.777
00:02:43.777 bus:
00:02:43.777 pci, vdev,
00:02:43.777 mempool:
00:02:43.777 ring,
00:02:43.777 dma:
00:02:43.777
00:02:43.777 net:
00:02:43.777
00:02:43.777 crypto:
00:02:43.777
00:02:43.777 compress:
00:02:43.777
00:02:43.777 vdpa:
00:02:43.777
00:02:43.777
00:02:43.777 Message:
00:02:43.777 =================
00:02:43.777 Content Skipped
00:02:43.777 =================
00:02:43.777
00:02:43.777 apps:
00:02:43.777 dumpcap: explicitly disabled via build config
00:02:43.777 graph: explicitly disabled via build config
00:02:43.777 pdump: explicitly disabled via build config
00:02:43.777 proc-info: explicitly disabled via build config
00:02:43.777 test-acl: explicitly disabled via build config
00:02:43.777 test-bbdev: explicitly disabled via build config
00:02:43.777 test-cmdline: explicitly disabled via build config
00:02:43.777 test-compress-perf: explicitly disabled via build config
00:02:43.777 test-crypto-perf: explicitly disabled via build config
00:02:43.777 test-dma-perf: explicitly disabled via build config
00:02:43.777 test-eventdev: explicitly disabled via build config
00:02:43.777 test-fib: explicitly disabled via build config
00:02:43.777 test-flow-perf: explicitly disabled via build config
00:02:43.777 test-gpudev: explicitly disabled via build config
00:02:43.777 test-mldev: explicitly disabled via build config
00:02:43.777 test-pipeline: explicitly disabled via build config
00:02:43.777 test-pmd: explicitly disabled via build config
00:02:43.777 test-regex: explicitly disabled via build config
00:02:43.777 test-sad: explicitly disabled via build config
00:02:43.778 test-security-perf: explicitly disabled via build config
00:02:43.778
00:02:43.778 libs:
00:02:43.778 argparse: explicitly disabled via build config
00:02:43.778 metrics: explicitly disabled via build config
00:02:43.778 acl: explicitly disabled via build config
00:02:43.778 bbdev: explicitly disabled via build config
00:02:43.778 bitratestats: explicitly disabled via build config
00:02:43.778 bpf: explicitly disabled via build config
00:02:43.778 cfgfile: explicitly disabled via build config
00:02:43.778 distributor: explicitly disabled via build config
00:02:43.778 efd: explicitly disabled via build config
00:02:43.778 eventdev: explicitly disabled via build config
00:02:43.778 dispatcher: explicitly disabled via build config
00:02:43.778 gpudev: explicitly disabled via build config
00:02:43.778 gro: explicitly disabled via build config
00:02:43.778 gso: explicitly disabled via build config
00:02:43.778 ip_frag: explicitly disabled via build config
00:02:43.778 jobstats: explicitly disabled via build config
00:02:43.778 latencystats: explicitly disabled via build config
00:02:43.778 lpm: explicitly disabled via build config
00:02:43.778 member: explicitly disabled via build config
00:02:43.778 pcapng: explicitly disabled via build config
00:02:43.778 rawdev: explicitly disabled via build config
00:02:43.778 regexdev: explicitly disabled via build config
00:02:43.778 mldev: explicitly disabled via build config
00:02:43.778 rib: explicitly disabled via build config
00:02:43.778 sched: explicitly disabled via build config
00:02:43.778 stack: explicitly disabled via build config
00:02:43.778 ipsec: explicitly disabled via build config
00:02:43.778 pdcp: explicitly disabled via build config
00:02:43.778 fib: explicitly disabled via build config
00:02:43.778 port: explicitly disabled via build config
00:02:43.778 pdump: explicitly disabled via build config
00:02:43.778 table: explicitly disabled via build config
00:02:43.778 pipeline: explicitly disabled via build config
00:02:43.778 graph: explicitly disabled via build config
00:02:43.778 node: explicitly disabled via build config
00:02:43.778
00:02:43.778 drivers:
00:02:43.778 common/cpt: not in enabled drivers build config
00:02:43.778 common/dpaax: not in enabled drivers build config
00:02:43.778 common/iavf: not in enabled drivers build config
00:02:43.778 common/idpf: not in enabled drivers build config
00:02:43.778 common/ionic: not in enabled drivers build config
00:02:43.778 common/mvep: not in enabled drivers build config
00:02:43.778 common/octeontx: not in enabled drivers build config
00:02:43.778 bus/auxiliary: not in enabled drivers build config
00:02:43.778 bus/cdx: not in enabled drivers build config
00:02:43.778 bus/dpaa: not in enabled drivers build config
00:02:43.778 bus/fslmc: not in enabled drivers build config
00:02:43.778 bus/ifpga: not in enabled drivers build config
00:02:43.778 bus/platform: not in enabled drivers build config
00:02:43.778 bus/uacce: not in enabled drivers build config
00:02:43.778 bus/vmbus: not in enabled drivers build config
00:02:43.778 common/cnxk: not in enabled drivers build config
00:02:43.778 common/mlx5: not in enabled drivers build config
00:02:43.778 common/nfp: not in enabled drivers build config
00:02:43.778 common/nitrox: not in enabled drivers build config
00:02:43.778 common/qat: not in enabled drivers build config
00:02:43.778 common/sfc_efx: not in enabled drivers build config
00:02:43.778 mempool/bucket: not in enabled drivers build config
00:02:43.778 mempool/cnxk: not in enabled drivers build config
00:02:43.778 mempool/dpaa: not in enabled drivers build config
00:02:43.778 mempool/dpaa2: not in enabled drivers build config
00:02:43.778 mempool/octeontx: not in enabled drivers build config
00:02:43.778 mempool/stack: not in enabled drivers build config
00:02:43.778 dma/cnxk: not in enabled drivers build config
00:02:43.778 dma/dpaa: not in enabled drivers build config
00:02:43.778 dma/dpaa2: not in enabled drivers build config
00:02:43.778 dma/hisilicon: not in enabled drivers build config
00:02:43.778 dma/idxd: not in enabled drivers build config
00:02:43.778 dma/ioat: not in enabled drivers build config
00:02:43.778 dma/skeleton: not in enabled drivers build config
00:02:43.778 net/af_packet: not in enabled drivers build config
00:02:43.778 net/af_xdp: not in enabled drivers build config
00:02:43.778 net/ark: not in enabled drivers build config
00:02:43.778 net/atlantic: not in enabled drivers build config
00:02:43.778 net/avp: not in enabled drivers build config
00:02:43.778 net/axgbe: not in enabled drivers build config
00:02:43.778 net/bnx2x: not in enabled drivers build config
00:02:43.778 net/bnxt: not in enabled drivers build config
00:02:43.778 net/bonding: not in enabled drivers build config
00:02:43.778 net/cnxk: not in enabled drivers build config
00:02:43.778 net/cpfl: not in enabled drivers build config
00:02:43.778 net/cxgbe: not in enabled drivers build config
00:02:43.778 net/dpaa: not in enabled drivers build config
00:02:43.778 net/dpaa2: not in enabled drivers build config
00:02:43.778 net/e1000: not in enabled drivers build config
00:02:43.778 net/ena: not in enabled drivers build config
00:02:43.778 net/enetc: not in enabled drivers build config
00:02:43.778 net/enetfec: not in enabled drivers build config
00:02:43.778 net/enic: not in enabled drivers build config
00:02:43.778 net/failsafe: not in enabled drivers build config
00:02:43.778 net/fm10k: not in enabled drivers build config
00:02:43.778 net/gve: not in enabled drivers build config
00:02:43.778 net/hinic: not in enabled drivers build config
00:02:43.778 net/hns3: not in enabled drivers build config
00:02:43.778 net/i40e: not in enabled drivers build config
00:02:43.778 net/iavf: not in enabled drivers build config
00:02:43.778 net/ice: not in enabled drivers build config
00:02:43.778 net/idpf: not in enabled drivers build config
00:02:43.778 net/igc: not in enabled drivers build config
00:02:43.778 net/ionic: not in enabled drivers build config
00:02:43.778 net/ipn3ke: not in enabled drivers build config
00:02:43.778 net/ixgbe: not in enabled drivers build config
00:02:43.778 net/mana: not in enabled drivers build config
00:02:43.778 net/memif: not in enabled drivers build config
00:02:43.778 net/mlx4: not in enabled drivers build config
00:02:43.778 net/mlx5: not in enabled drivers build config
00:02:43.778 net/mvneta: not in enabled drivers build config
00:02:43.778 net/mvpp2: not in enabled drivers build config
00:02:43.778 net/netvsc: not in enabled drivers build config
00:02:43.778 net/nfb: not in enabled drivers build config
00:02:43.778 net/nfp: not in enabled drivers build config
00:02:43.778 net/ngbe: not in enabled drivers build config
00:02:43.778 net/null: not in enabled drivers build config
00:02:43.778 net/octeontx: not in enabled drivers build config
00:02:43.778 net/octeon_ep: not in enabled drivers build config
00:02:43.778 net/pcap: not in enabled drivers build config
00:02:43.778 net/pfe: not in enabled drivers build config
00:02:43.778 net/qede: not in enabled drivers build config
00:02:43.778 net/ring: not in enabled drivers build config
00:02:43.778 net/sfc: not in enabled drivers build config
00:02:43.778 net/softnic: not in enabled drivers build config
00:02:43.778 net/tap: not in enabled drivers build config
00:02:43.778 net/thunderx: not in enabled drivers build config
00:02:43.778 net/txgbe: not in enabled drivers build config
00:02:43.778 net/vdev_netvsc: not in enabled drivers build config
00:02:43.778 net/vhost: not in enabled drivers build config
00:02:43.778 net/virtio: not in enabled drivers build config
00:02:43.778 net/vmxnet3: not in enabled drivers build config
00:02:43.778 raw/*: missing internal dependency, "rawdev"
00:02:43.778 crypto/armv8: not in enabled drivers build config
00:02:43.778 crypto/bcmfs: not in enabled drivers build config
00:02:43.778 crypto/caam_jr: not in enabled drivers build config
00:02:43.778 crypto/ccp: not in enabled drivers build config
00:02:43.778 crypto/cnxk: not in enabled drivers build config
00:02:43.778 crypto/dpaa_sec: not in enabled drivers build config
00:02:43.778 crypto/dpaa2_sec: not in enabled drivers build config
00:02:43.778 crypto/ipsec_mb: not in enabled drivers build config
00:02:43.778 crypto/mlx5: not in enabled drivers build config
00:02:43.778 crypto/mvsam: not in enabled drivers build config
00:02:43.778 crypto/nitrox: not in enabled drivers build config
00:02:43.778 crypto/null: not in enabled drivers build config
00:02:43.778 crypto/octeontx: not in enabled drivers build config
00:02:43.778 crypto/openssl: not in enabled drivers build config
00:02:43.778 crypto/scheduler: not in enabled drivers build config
00:02:43.778 crypto/uadk: not in enabled drivers build config
00:02:43.778 crypto/virtio: not in enabled drivers build config
00:02:43.778 compress/isal: not in enabled drivers build config
00:02:43.778 compress/mlx5: not in enabled drivers build config
00:02:43.778 compress/nitrox: not in enabled drivers build config
00:02:43.778 compress/octeontx: not in enabled drivers build config
00:02:43.778 compress/zlib: not in enabled drivers build config
00:02:43.778 regex/*: missing internal dependency, "regexdev"
00:02:43.778 ml/*: missing internal dependency, "mldev"
00:02:43.778 vdpa/ifc: not in enabled drivers build config
00:02:43.778 vdpa/mlx5: not in enabled drivers build config
00:02:43.778 vdpa/nfp: not in enabled drivers build config
00:02:43.778 vdpa/sfc: not in enabled drivers build config
00:02:43.778 event/*: missing internal dependency, "eventdev"
00:02:43.778 baseband/*: missing internal dependency, "bbdev"
00:02:43.778 gpu/*: missing internal dependency, "gpudev"
00:02:43.778
00:02:43.778
00:02:44.037 Build targets in project: 84
00:02:44.037
00:02:44.037 DPDK 24.03.0
00:02:44.037
00:02:44.037 User defined options
00:02:44.037 buildtype : debug
00:02:44.037 default_library : shared
00:02:44.037 libdir : lib
00:02:44.037 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:44.037 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:44.037 c_link_args :
00:02:44.037 cpu_instruction_set: native
00:02:44.037 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:02:44.037 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:02:44.037 enable_docs : false
00:02:44.037 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:44.037 enable_kmods : false
00:02:44.037 max_lcores : 128
00:02:44.037 tests : false
00:02:44.037
00:02:44.037 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:44.304 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:44.304 [1/267] Compiling C object lib/librte_log.a.p/log_log.c.o
[2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
[3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
[4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
[5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
[6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
[7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
[8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
[9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
[10/267] Linking static target lib/librte_kvargs.a
[11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
[12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
[13/267] Linking static target lib/librte_log.a
[14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
[15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
[16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
[17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
[18/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
[19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
[20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
[21/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
[22/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
[23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
[24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
[25/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
[26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
[27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
[28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
[29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
[30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
[31/267] Linking static target lib/librte_pci.a
[32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
[33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
[34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
[35/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
[36/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
[37/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
[38/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
[39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
[40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
[41/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
[42/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
[43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
[44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
[45/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
[46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
[47/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
[48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
[49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
[50/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
[51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
[52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
[53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
[54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
[55/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
[56/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
[57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
[58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
[59/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
[60/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
[61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
[62/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
[63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
[64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
[65/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
[66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
[67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
[68/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
[69/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
[70/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
[71/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
[72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
[73/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
[74/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
[75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
[76/267] Linking static target lib/librte_telemetry.a
[77/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
[78/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
[79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
[80/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
[81/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
[82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
[83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
[84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
[85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
[86/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
[87/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
[88/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
[89/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
[90/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
[91/267] Linking static target lib/librte_meter.a
[92/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
[93/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
[94/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
[95/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
[96/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
[97/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
[98/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
[99/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
[100/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
[101/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
[102/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
[103/267] Linking static target lib/librte_ring.a
[104/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
[105/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
[106/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
[107/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
[108/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
[109/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
[110/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
[111/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
[112/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
[113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
[114/267] Linking static target lib/librte_dmadev.a
[115/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
[116/267] Linking static target lib/librte_timer.a
[117/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
[118/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
[119/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
[120/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
[121/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
[122/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
[123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
[124/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
[125/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
[126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
[127/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
[128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
[129/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
[130/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
[131/267] Linking static target lib/librte_reorder.a
[132/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
[133/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
[134/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
[135/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
[136/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
[137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
[138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
[139/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
[140/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
[141/267] Linking static target lib/librte_compressdev.a
[142/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
[143/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
[144/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
[145/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
[146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
[147/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
[148/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
[149/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
[150/267] Compiling C object
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:45.109 [151/267] Linking static target lib/librte_net.a 00:02:45.109 [152/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:45.109 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:45.109 [154/267] Linking static target lib/librte_cmdline.a 00:02:45.109 [155/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:45.109 [156/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:45.109 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:45.109 [158/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:45.109 [159/267] Linking static target lib/librte_power.a 00:02:45.109 [160/267] Linking target lib/librte_log.so.24.1 00:02:45.109 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:45.109 [162/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:45.109 [163/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:45.109 [164/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:45.109 [165/267] Linking static target lib/librte_rcu.a 00:02:45.109 [166/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:45.109 [167/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:45.109 [168/267] Linking static target lib/librte_mempool.a 00:02:45.109 [169/267] Linking static target lib/librte_eal.a 00:02:45.109 [170/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:45.109 [171/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:45.109 [172/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:45.109 [173/267] Linking static target lib/librte_security.a 00:02:45.109 [174/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:45.109 [175/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:45.109 [176/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.109 [177/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:45.109 [178/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:45.109 [179/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:45.109 [180/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:45.109 [181/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:45.109 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:45.109 [183/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.109 [184/267] Linking target lib/librte_kvargs.so.24.1 00:02:45.368 [185/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:45.368 [186/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:45.368 [187/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:45.368 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:45.368 [189/267] Linking static target lib/librte_mbuf.a 00:02:45.368 [190/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.368 [191/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.368 [192/267] Linking static target lib/librte_hash.a 00:02:45.368 
[193/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:45.368 [194/267] Linking static target drivers/librte_bus_vdev.a 00:02:45.368 [195/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.368 [196/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.368 [197/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.368 [198/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.368 [199/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:45.368 [200/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:45.368 [201/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:45.368 [202/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:45.368 [203/267] Linking target lib/librte_telemetry.so.24.1 00:02:45.368 [204/267] Linking static target drivers/librte_bus_pci.a 00:02:45.368 [205/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:45.368 [206/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.368 [207/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.368 [208/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.368 [209/267] Linking static target drivers/librte_mempool_ring.a 00:02:45.368 [210/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:45.368 [211/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:45.368 [212/267] Linking static target lib/librte_cryptodev.a 00:02:45.368 [213/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.368 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:45.368 [215/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.368 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.368 [217/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.627 [218/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.627 [219/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.627 [220/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.627 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.627 [222/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:45.627 [223/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.627 [224/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.627 [225/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:45.627 [226/267] Linking static target lib/librte_ethdev.a 00:02:46.562 [227/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.562 [228/267] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:46.562 [229/267] Linking static target lib/librte_vhost.a 00:02:47.950 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.482 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.740 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.999 [233/267] Linking target lib/librte_eal.so.24.1 00:02:50.999 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:50.999 [235/267] Linking target lib/librte_ring.so.24.1 00:02:50.999 [236/267] Linking target lib/librte_pci.so.24.1 00:02:50.999 [237/267] Linking target lib/librte_meter.so.24.1 00:02:50.999 [238/267] Linking target lib/librte_timer.so.24.1 00:02:50.999 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:50.999 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:50.999 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:50.999 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:50.999 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:50.999 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:50.999 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:50.999 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:50.999 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:50.999 [248/267] Linking target lib/librte_mempool.so.24.1 00:02:51.257 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:51.257 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:51.257 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:51.257 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:51.257 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:51.257 [254/267] Linking target lib/librte_net.so.24.1 00:02:51.257 [255/267] Linking target lib/librte_compressdev.so.24.1 00:02:51.257 [256/267] Linking target lib/librte_reorder.so.24.1 00:02:51.257 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:51.516 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:51.516 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:51.516 [260/267] Linking target lib/librte_hash.so.24.1 00:02:51.516 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:51.516 [262/267] Linking target lib/librte_security.so.24.1 00:02:51.516 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:51.516 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:51.516 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:51.516 [266/267] Linking target lib/librte_power.so.24.1 00:02:51.516 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:51.516 INFO: autodetecting backend as ninja 00:02:51.516 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:03:03.724 CC lib/ut/ut.o 00:03:03.724 CC lib/log/log.o 00:03:03.724 CC lib/log/log_flags.o 00:03:03.724 CC lib/log/log_deprecated.o 00:03:03.724 CC lib/ut_mock/mock.o 
00:03:03.724 LIB libspdk_ut_mock.a 00:03:03.724 LIB libspdk_log.a 00:03:03.724 SO libspdk_ut_mock.so.6.0 00:03:03.724 LIB libspdk_ut.a 00:03:03.724 SO libspdk_log.so.7.1 00:03:03.724 SO libspdk_ut.so.2.0 00:03:03.724 SYMLINK libspdk_ut_mock.so 00:03:03.724 SYMLINK libspdk_ut.so 00:03:03.724 SYMLINK libspdk_log.so 00:03:03.724 CC lib/util/base64.o 00:03:03.724 CC lib/util/bit_array.o 00:03:03.724 CC lib/util/cpuset.o 00:03:03.724 CC lib/util/crc16.o 00:03:03.724 CC lib/util/crc32c.o 00:03:03.724 CC lib/util/crc32.o 00:03:03.724 CC lib/util/crc32_ieee.o 00:03:03.724 CC lib/util/crc64.o 00:03:03.724 CC lib/util/dif.o 00:03:03.724 CC lib/ioat/ioat.o 00:03:03.724 CC lib/util/file.o 00:03:03.724 CC lib/util/fd.o 00:03:03.724 CC lib/util/fd_group.o 00:03:03.724 CC lib/util/iov.o 00:03:03.724 CC lib/util/hexlify.o 00:03:03.724 CC lib/util/net.o 00:03:03.724 CC lib/util/math.o 00:03:03.724 CC lib/util/pipe.o 00:03:03.724 CC lib/util/strerror_tls.o 00:03:03.724 CC lib/dma/dma.o 00:03:03.724 CC lib/util/string.o 00:03:03.724 CC lib/util/uuid.o 00:03:03.724 CXX lib/trace_parser/trace.o 00:03:03.724 CC lib/util/xor.o 00:03:03.724 CC lib/util/zipf.o 00:03:03.724 CC lib/util/md5.o 00:03:03.724 CC lib/vfio_user/host/vfio_user.o 00:03:03.724 CC lib/vfio_user/host/vfio_user_pci.o 00:03:03.724 LIB libspdk_dma.a 00:03:03.724 SO libspdk_dma.so.5.0 00:03:03.724 SYMLINK libspdk_dma.so 00:03:03.724 LIB libspdk_ioat.a 00:03:03.724 SO libspdk_ioat.so.7.0 00:03:03.724 LIB libspdk_vfio_user.a 00:03:03.724 SYMLINK libspdk_ioat.so 00:03:03.724 SO libspdk_vfio_user.so.5.0 00:03:03.724 SYMLINK libspdk_vfio_user.so 00:03:03.724 LIB libspdk_util.a 00:03:03.724 SO libspdk_util.so.10.1 00:03:03.724 SYMLINK libspdk_util.so 00:03:03.724 LIB libspdk_trace_parser.a 00:03:03.724 SO libspdk_trace_parser.so.6.0 00:03:03.724 CC lib/rdma_utils/rdma_utils.o 00:03:03.724 CC lib/vmd/vmd.o 00:03:03.724 CC lib/vmd/led.o 00:03:03.724 CC lib/idxd/idxd.o 00:03:03.724 CC lib/conf/conf.o 00:03:03.724 CC lib/idxd/idxd_user.o 00:03:03.724 CC lib/idxd/idxd_kernel.o 00:03:03.724 CC lib/json/json_util.o 00:03:03.724 CC lib/json/json_write.o 00:03:03.724 CC lib/json/json_parse.o 00:03:03.724 CC lib/env_dpdk/env.o 00:03:03.724 CC lib/env_dpdk/memory.o 00:03:03.724 CC lib/env_dpdk/pci.o 00:03:03.724 CC lib/env_dpdk/init.o 00:03:03.724 CC lib/env_dpdk/threads.o 00:03:03.724 CC lib/env_dpdk/pci_virtio.o 00:03:03.724 CC lib/env_dpdk/pci_ioat.o 00:03:03.724 CC lib/env_dpdk/pci_vmd.o 00:03:03.724 CC lib/env_dpdk/pci_idxd.o 00:03:03.724 CC lib/env_dpdk/sigbus_handler.o 00:03:03.724 CC lib/env_dpdk/pci_event.o 00:03:03.724 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:03.724 CC lib/env_dpdk/pci_dpdk.o 00:03:03.724 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:03.724 SYMLINK libspdk_trace_parser.so 00:03:03.724 LIB libspdk_conf.a 00:03:03.724 SO libspdk_conf.so.6.0 00:03:03.724 LIB libspdk_json.a 00:03:03.724 LIB libspdk_rdma_utils.a 00:03:03.724 SO libspdk_json.so.6.0 00:03:03.724 SO libspdk_rdma_utils.so.1.0 00:03:03.724 SYMLINK libspdk_conf.so 00:03:03.724 SYMLINK libspdk_rdma_utils.so 00:03:03.724 SYMLINK libspdk_json.so 00:03:03.724 LIB libspdk_idxd.a 00:03:03.724 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:03.724 CC lib/rdma_provider/common.o 00:03:03.725 CC lib/jsonrpc/jsonrpc_server.o 00:03:03.725 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:03.725 CC lib/jsonrpc/jsonrpc_client.o 00:03:03.725 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:03.725 SO libspdk_idxd.so.12.1 00:03:03.725 LIB libspdk_vmd.a 00:03:03.725 SO libspdk_vmd.so.6.0 00:03:03.725 SYMLINK libspdk_idxd.so 
00:03:03.725 SYMLINK libspdk_vmd.so 00:03:03.725 LIB libspdk_rdma_provider.a 00:03:03.725 SO libspdk_rdma_provider.so.7.0 00:03:03.725 LIB libspdk_jsonrpc.a 00:03:03.725 SO libspdk_jsonrpc.so.6.0 00:03:03.725 SYMLINK libspdk_rdma_provider.so 00:03:03.725 SYMLINK libspdk_jsonrpc.so 00:03:04.000 CC lib/rpc/rpc.o 00:03:04.000 LIB libspdk_env_dpdk.a 00:03:04.000 LIB libspdk_rpc.a 00:03:04.000 SO libspdk_rpc.so.6.0 00:03:04.000 SO libspdk_env_dpdk.so.15.1 00:03:04.259 SYMLINK libspdk_rpc.so 00:03:04.259 SYMLINK libspdk_env_dpdk.so 00:03:04.259 CC lib/notify/notify.o 00:03:04.259 CC lib/notify/notify_rpc.o 00:03:04.259 CC lib/keyring/keyring.o 00:03:04.259 CC lib/keyring/keyring_rpc.o 00:03:04.259 CC lib/trace/trace.o 00:03:04.259 CC lib/trace/trace_flags.o 00:03:04.259 CC lib/trace/trace_rpc.o 00:03:04.519 LIB libspdk_notify.a 00:03:04.519 SO libspdk_notify.so.6.0 00:03:04.519 LIB libspdk_keyring.a 00:03:04.519 SYMLINK libspdk_notify.so 00:03:04.519 SO libspdk_keyring.so.2.0 00:03:04.519 LIB libspdk_trace.a 00:03:04.519 SO libspdk_trace.so.11.0 00:03:04.519 SYMLINK libspdk_keyring.so 00:03:04.778 SYMLINK libspdk_trace.so 00:03:04.778 CC lib/thread/thread.o 00:03:04.778 CC lib/thread/iobuf.o 00:03:04.778 CC lib/sock/sock.o 00:03:04.778 CC lib/sock/sock_rpc.o 00:03:05.346 LIB libspdk_sock.a 00:03:05.346 SO libspdk_sock.so.10.0 00:03:05.346 SYMLINK libspdk_sock.so 00:03:05.606 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:05.606 CC lib/nvme/nvme_ctrlr.o 00:03:05.606 CC lib/nvme/nvme_fabric.o 00:03:05.606 CC lib/nvme/nvme_ns_cmd.o 00:03:05.606 CC lib/nvme/nvme_ns.o 00:03:05.606 CC lib/nvme/nvme_pcie.o 00:03:05.606 CC lib/nvme/nvme_pcie_common.o 00:03:05.606 CC lib/nvme/nvme.o 00:03:05.606 CC lib/nvme/nvme_qpair.o 00:03:05.606 CC lib/nvme/nvme_quirks.o 00:03:05.606 CC lib/nvme/nvme_transport.o 00:03:05.606 CC lib/nvme/nvme_discovery.o 00:03:05.606 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:05.606 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:05.606 CC lib/nvme/nvme_tcp.o 00:03:05.606 CC lib/nvme/nvme_opal.o 00:03:05.606 CC lib/nvme/nvme_zns.o 00:03:05.606 CC lib/nvme/nvme_io_msg.o 00:03:05.606 CC lib/nvme/nvme_poll_group.o 00:03:05.606 CC lib/nvme/nvme_stubs.o 00:03:05.606 CC lib/nvme/nvme_auth.o 00:03:05.606 CC lib/nvme/nvme_cuse.o 00:03:05.606 CC lib/nvme/nvme_vfio_user.o 00:03:05.606 CC lib/nvme/nvme_rdma.o 00:03:06.173 LIB libspdk_thread.a 00:03:06.173 SO libspdk_thread.so.11.0 00:03:06.173 SYMLINK libspdk_thread.so 00:03:06.433 CC lib/fsdev/fsdev.o 00:03:06.433 CC lib/fsdev/fsdev_io.o 00:03:06.433 CC lib/blob/request.o 00:03:06.433 CC lib/blob/blobstore.o 00:03:06.433 CC lib/fsdev/fsdev_rpc.o 00:03:06.433 CC lib/blob/zeroes.o 00:03:06.433 CC lib/blob/blob_bs_dev.o 00:03:06.433 CC lib/virtio/virtio.o 00:03:06.433 CC lib/virtio/virtio_vfio_user.o 00:03:06.433 CC lib/virtio/virtio_vhost_user.o 00:03:06.433 CC lib/virtio/virtio_pci.o 00:03:06.433 CC lib/vfu_tgt/tgt_rpc.o 00:03:06.433 CC lib/vfu_tgt/tgt_endpoint.o 00:03:06.433 CC lib/init/json_config.o 00:03:06.433 CC lib/accel/accel_rpc.o 00:03:06.433 CC lib/init/subsystem_rpc.o 00:03:06.433 CC lib/init/rpc.o 00:03:06.433 CC lib/init/subsystem.o 00:03:06.433 CC lib/accel/accel.o 00:03:06.433 CC lib/accel/accel_sw.o 00:03:06.433 LIB libspdk_init.a 00:03:06.433 SO libspdk_init.so.6.0 00:03:06.692 SYMLINK libspdk_init.so 00:03:06.692 LIB libspdk_virtio.a 00:03:06.692 LIB libspdk_vfu_tgt.a 00:03:06.692 SO libspdk_virtio.so.7.0 00:03:06.693 SO libspdk_vfu_tgt.so.3.0 00:03:06.693 SYMLINK libspdk_vfu_tgt.so 00:03:06.693 SYMLINK libspdk_virtio.so 00:03:06.693 CC 
lib/event/app.o 00:03:06.693 CC lib/event/reactor.o 00:03:06.693 CC lib/event/log_rpc.o 00:03:06.693 CC lib/event/app_rpc.o 00:03:06.693 CC lib/event/scheduler_static.o 00:03:06.952 LIB libspdk_fsdev.a 00:03:06.952 SO libspdk_fsdev.so.2.0 00:03:06.952 SYMLINK libspdk_fsdev.so 00:03:07.212 LIB libspdk_event.a 00:03:07.212 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:07.212 SO libspdk_event.so.14.0 00:03:07.212 SYMLINK libspdk_event.so 00:03:07.212 LIB libspdk_accel.a 00:03:07.212 SO libspdk_accel.so.16.0 00:03:07.212 LIB libspdk_nvme.a 00:03:07.212 SYMLINK libspdk_accel.so 00:03:07.471 SO libspdk_nvme.so.15.0 00:03:07.471 CC lib/bdev/bdev.o 00:03:07.471 CC lib/bdev/bdev_rpc.o 00:03:07.471 CC lib/bdev/bdev_zone.o 00:03:07.471 CC lib/bdev/part.o 00:03:07.471 CC lib/bdev/scsi_nvme.o 00:03:07.471 SYMLINK libspdk_nvme.so 00:03:07.730 LIB libspdk_fuse_dispatcher.a 00:03:07.730 SO libspdk_fuse_dispatcher.so.1.0 00:03:07.730 SYMLINK libspdk_fuse_dispatcher.so 00:03:08.298 LIB libspdk_blob.a 00:03:08.298 SO libspdk_blob.so.12.0 00:03:08.298 SYMLINK libspdk_blob.so 00:03:08.557 CC lib/lvol/lvol.o 00:03:08.557 CC lib/blobfs/blobfs.o 00:03:08.557 CC lib/blobfs/tree.o 00:03:09.126 LIB libspdk_blobfs.a 00:03:09.126 SO libspdk_blobfs.so.11.0 00:03:09.126 SYMLINK libspdk_blobfs.so 00:03:09.126 LIB libspdk_lvol.a 00:03:09.385 SO libspdk_lvol.so.11.0 00:03:09.385 SYMLINK libspdk_lvol.so 00:03:09.644 LIB libspdk_bdev.a 00:03:09.644 SO libspdk_bdev.so.17.0 00:03:09.645 SYMLINK libspdk_bdev.so 00:03:09.905 CC lib/scsi/dev.o 00:03:09.905 CC lib/ublk/ublk.o 00:03:09.905 CC lib/ublk/ublk_rpc.o 00:03:09.905 CC lib/scsi/lun.o 00:03:09.905 CC lib/nbd/nbd_rpc.o 00:03:09.905 CC lib/nbd/nbd.o 00:03:09.905 CC lib/scsi/port.o 00:03:09.905 CC lib/nvmf/ctrlr.o 00:03:09.905 CC lib/scsi/scsi.o 00:03:09.906 CC lib/scsi/scsi_bdev.o 00:03:09.906 CC lib/nvmf/ctrlr_discovery.o 00:03:09.906 CC lib/ftl/ftl_core.o 00:03:09.906 CC lib/ftl/ftl_init.o 00:03:09.906 CC lib/scsi/scsi_pr.o 00:03:09.906 CC lib/nvmf/ctrlr_bdev.o 00:03:09.906 CC lib/scsi/scsi_rpc.o 00:03:09.906 CC lib/ftl/ftl_debug.o 00:03:09.906 CC lib/nvmf/subsystem.o 00:03:09.906 CC lib/scsi/task.o 00:03:09.906 CC lib/ftl/ftl_layout.o 00:03:09.906 CC lib/nvmf/nvmf_rpc.o 00:03:09.906 CC lib/nvmf/nvmf.o 00:03:09.906 CC lib/nvmf/transport.o 00:03:09.906 CC lib/ftl/ftl_io.o 00:03:09.906 CC lib/nvmf/tcp.o 00:03:09.906 CC lib/nvmf/stubs.o 00:03:09.906 CC lib/nvmf/mdns_server.o 00:03:09.906 CC lib/ftl/ftl_sb.o 00:03:09.906 CC lib/ftl/ftl_l2p.o 00:03:09.906 CC lib/nvmf/vfio_user.o 00:03:09.906 CC lib/nvmf/rdma.o 00:03:09.906 CC lib/ftl/ftl_l2p_flat.o 00:03:09.906 CC lib/ftl/ftl_nv_cache.o 00:03:09.906 CC lib/nvmf/auth.o 00:03:09.906 CC lib/ftl/ftl_band.o 00:03:09.906 CC lib/ftl/ftl_band_ops.o 00:03:09.906 CC lib/ftl/ftl_writer.o 00:03:09.906 CC lib/ftl/ftl_rq.o 00:03:09.906 CC lib/ftl/ftl_reloc.o 00:03:09.906 CC lib/ftl/ftl_l2p_cache.o 00:03:09.906 CC lib/ftl/ftl_p2l.o 00:03:09.906 CC lib/ftl/ftl_p2l_log.o 00:03:09.906 CC lib/ftl/mngt/ftl_mngt.o 00:03:09.906 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:09.906 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:09.906 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:09.906 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:09.906 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:09.906 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:09.906 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:09.906 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:09.906 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:09.906 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:09.906 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:09.906 CC 
lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:09.906 CC lib/ftl/utils/ftl_conf.o 00:03:09.906 CC lib/ftl/utils/ftl_md.o 00:03:09.906 CC lib/ftl/utils/ftl_mempool.o 00:03:09.906 CC lib/ftl/utils/ftl_bitmap.o 00:03:09.906 CC lib/ftl/utils/ftl_property.o 00:03:09.906 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:09.906 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:09.906 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:09.906 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:09.906 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:09.906 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:09.906 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:09.906 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:09.906 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:09.906 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:09.906 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:09.906 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:09.906 CC lib/ftl/base/ftl_base_dev.o 00:03:09.906 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:09.906 CC lib/ftl/base/ftl_base_bdev.o 00:03:09.906 CC lib/ftl/ftl_trace.o 00:03:10.474 LIB libspdk_nbd.a 00:03:10.474 SO libspdk_nbd.so.7.0 00:03:10.474 LIB libspdk_scsi.a 00:03:10.474 SYMLINK libspdk_nbd.so 00:03:10.474 SO libspdk_scsi.so.9.0 00:03:10.474 SYMLINK libspdk_scsi.so 00:03:10.474 LIB libspdk_ublk.a 00:03:10.733 SO libspdk_ublk.so.3.0 00:03:10.733 SYMLINK libspdk_ublk.so 00:03:10.733 CC lib/vhost/vhost.o 00:03:10.733 CC lib/iscsi/conn.o 00:03:10.733 CC lib/iscsi/init_grp.o 00:03:10.733 CC lib/iscsi/param.o 00:03:10.733 CC lib/vhost/vhost_scsi.o 00:03:10.733 CC lib/vhost/vhost_rpc.o 00:03:10.733 CC lib/iscsi/iscsi.o 00:03:10.733 CC lib/vhost/rte_vhost_user.o 00:03:10.733 CC lib/iscsi/portal_grp.o 00:03:10.733 CC lib/iscsi/tgt_node.o 00:03:10.733 CC lib/vhost/vhost_blk.o 00:03:10.733 CC lib/iscsi/iscsi_subsystem.o 00:03:10.733 CC lib/iscsi/task.o 00:03:10.733 CC lib/iscsi/iscsi_rpc.o 00:03:10.991 LIB libspdk_ftl.a 00:03:10.991 SO libspdk_ftl.so.9.0 00:03:11.250 SYMLINK libspdk_ftl.so 00:03:11.250 LIB libspdk_nvmf.a 00:03:11.509 SO libspdk_nvmf.so.20.0 00:03:11.509 LIB libspdk_vhost.a 00:03:11.509 SO libspdk_vhost.so.8.0 00:03:11.509 SYMLINK libspdk_nvmf.so 00:03:11.509 SYMLINK libspdk_vhost.so 00:03:11.769 LIB libspdk_iscsi.a 00:03:11.769 SO libspdk_iscsi.so.8.0 00:03:12.028 SYMLINK libspdk_iscsi.so 00:03:12.288 CC module/vfu_device/vfu_virtio.o 00:03:12.288 CC module/vfu_device/vfu_virtio_scsi.o 00:03:12.288 CC module/vfu_device/vfu_virtio_rpc.o 00:03:12.288 CC module/vfu_device/vfu_virtio_blk.o 00:03:12.288 CC module/vfu_device/vfu_virtio_fs.o 00:03:12.288 CC module/env_dpdk/env_dpdk_rpc.o 00:03:12.288 CC module/accel/dsa/accel_dsa.o 00:03:12.288 CC module/accel/dsa/accel_dsa_rpc.o 00:03:12.288 CC module/blob/bdev/blob_bdev.o 00:03:12.288 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:12.288 CC module/accel/iaa/accel_iaa_rpc.o 00:03:12.288 CC module/accel/ioat/accel_ioat.o 00:03:12.288 CC module/accel/iaa/accel_iaa.o 00:03:12.288 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:12.288 CC module/accel/ioat/accel_ioat_rpc.o 00:03:12.288 CC module/keyring/linux/keyring.o 00:03:12.288 CC module/sock/posix/posix.o 00:03:12.288 CC module/keyring/linux/keyring_rpc.o 00:03:12.288 CC module/accel/error/accel_error.o 00:03:12.288 CC module/accel/error/accel_error_rpc.o 00:03:12.288 CC module/keyring/file/keyring.o 00:03:12.288 CC module/fsdev/aio/fsdev_aio.o 00:03:12.288 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:12.288 CC module/keyring/file/keyring_rpc.o 00:03:12.288 CC module/fsdev/aio/linux_aio_mgr.o 00:03:12.288 CC 
module/scheduler/gscheduler/gscheduler.o 00:03:12.288 LIB libspdk_env_dpdk_rpc.a 00:03:12.288 SO libspdk_env_dpdk_rpc.so.6.0 00:03:12.288 SYMLINK libspdk_env_dpdk_rpc.so 00:03:12.288 LIB libspdk_keyring_linux.a 00:03:12.288 LIB libspdk_scheduler_dpdk_governor.a 00:03:12.546 SO libspdk_keyring_linux.so.1.0 00:03:12.546 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:12.546 LIB libspdk_accel_iaa.a 00:03:12.546 LIB libspdk_keyring_file.a 00:03:12.546 SO libspdk_accel_iaa.so.3.0 00:03:12.546 LIB libspdk_scheduler_gscheduler.a 00:03:12.546 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:12.546 LIB libspdk_accel_ioat.a 00:03:12.546 SYMLINK libspdk_keyring_linux.so 00:03:12.546 SO libspdk_keyring_file.so.2.0 00:03:12.546 SO libspdk_scheduler_gscheduler.so.4.0 00:03:12.546 LIB libspdk_scheduler_dynamic.a 00:03:12.546 LIB libspdk_accel_dsa.a 00:03:12.546 SO libspdk_accel_ioat.so.6.0 00:03:12.546 LIB libspdk_accel_error.a 00:03:12.546 SO libspdk_scheduler_dynamic.so.4.0 00:03:12.546 SYMLINK libspdk_accel_iaa.so 00:03:12.546 SO libspdk_accel_dsa.so.5.0 00:03:12.546 SYMLINK libspdk_scheduler_gscheduler.so 00:03:12.546 SO libspdk_accel_error.so.2.0 00:03:12.546 SYMLINK libspdk_keyring_file.so 00:03:12.546 SYMLINK libspdk_accel_ioat.so 00:03:12.546 LIB libspdk_blob_bdev.a 00:03:12.546 SYMLINK libspdk_scheduler_dynamic.so 00:03:12.546 SYMLINK libspdk_accel_dsa.so 00:03:12.546 SO libspdk_blob_bdev.so.12.0 00:03:12.546 SYMLINK libspdk_accel_error.so 00:03:12.546 SYMLINK libspdk_blob_bdev.so 00:03:12.546 LIB libspdk_vfu_device.a 00:03:12.546 SO libspdk_vfu_device.so.3.0 00:03:12.805 SYMLINK libspdk_vfu_device.so 00:03:12.805 LIB libspdk_fsdev_aio.a 00:03:12.805 SO libspdk_fsdev_aio.so.1.0 00:03:12.805 LIB libspdk_sock_posix.a 00:03:12.805 SO libspdk_sock_posix.so.6.0 00:03:12.805 SYMLINK libspdk_fsdev_aio.so 00:03:12.805 SYMLINK libspdk_sock_posix.so 00:03:12.805 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:12.805 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:12.805 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:12.805 CC module/blobfs/bdev/blobfs_bdev.o 00:03:12.805 CC module/bdev/split/vbdev_split.o 00:03:12.805 CC module/bdev/split/vbdev_split_rpc.o 00:03:12.805 CC module/bdev/delay/vbdev_delay.o 00:03:12.805 CC module/bdev/error/vbdev_error.o 00:03:12.805 CC module/bdev/ftl/bdev_ftl.o 00:03:12.805 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:12.805 CC module/bdev/error/vbdev_error_rpc.o 00:03:12.805 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:12.805 CC module/bdev/nvme/bdev_nvme.o 00:03:12.805 CC module/bdev/iscsi/bdev_iscsi.o 00:03:12.805 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:12.805 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:12.805 CC module/bdev/nvme/nvme_rpc.o 00:03:12.805 CC module/bdev/aio/bdev_aio.o 00:03:12.805 CC module/bdev/lvol/vbdev_lvol.o 00:03:12.805 CC module/bdev/aio/bdev_aio_rpc.o 00:03:12.805 CC module/bdev/malloc/bdev_malloc.o 00:03:12.805 CC module/bdev/gpt/gpt.o 00:03:12.805 CC module/bdev/nvme/bdev_mdns_client.o 00:03:12.805 CC module/bdev/null/bdev_null.o 00:03:12.805 CC module/bdev/raid/bdev_raid.o 00:03:12.805 CC module/bdev/nvme/vbdev_opal.o 00:03:12.805 CC module/bdev/raid/bdev_raid_rpc.o 00:03:12.805 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:12.805 CC module/bdev/passthru/vbdev_passthru.o 00:03:12.805 CC module/bdev/gpt/vbdev_gpt.o 00:03:12.805 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:12.805 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:12.805 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:12.805 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:12.805 CC 
module/bdev/null/bdev_null_rpc.o 00:03:12.805 CC module/bdev/raid/bdev_raid_sb.o 00:03:12.805 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:12.805 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:12.805 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:12.805 CC module/bdev/raid/raid0.o 00:03:12.805 CC module/bdev/raid/concat.o 00:03:12.805 CC module/bdev/raid/raid1.o 00:03:13.064 LIB libspdk_bdev_split.a 00:03:13.064 LIB libspdk_blobfs_bdev.a 00:03:13.064 SO libspdk_blobfs_bdev.so.6.0 00:03:13.064 SO libspdk_bdev_split.so.6.0 00:03:13.064 LIB libspdk_bdev_ftl.a 00:03:13.064 SYMLINK libspdk_blobfs_bdev.so 00:03:13.064 SYMLINK libspdk_bdev_split.so 00:03:13.064 SO libspdk_bdev_ftl.so.6.0 00:03:13.064 LIB libspdk_bdev_null.a 00:03:13.064 LIB libspdk_bdev_error.a 00:03:13.064 LIB libspdk_bdev_malloc.a 00:03:13.064 LIB libspdk_bdev_gpt.a 00:03:13.064 SO libspdk_bdev_error.so.6.0 00:03:13.064 SO libspdk_bdev_null.so.6.0 00:03:13.064 SO libspdk_bdev_gpt.so.6.0 00:03:13.064 SYMLINK libspdk_bdev_ftl.so 00:03:13.064 SO libspdk_bdev_malloc.so.6.0 00:03:13.064 LIB libspdk_bdev_passthru.a 00:03:13.064 LIB libspdk_bdev_zone_block.a 00:03:13.064 SO libspdk_bdev_passthru.so.6.0 00:03:13.064 SYMLINK libspdk_bdev_null.so 00:03:13.064 SYMLINK libspdk_bdev_error.so 00:03:13.064 SO libspdk_bdev_zone_block.so.6.0 00:03:13.064 SYMLINK libspdk_bdev_gpt.so 00:03:13.064 SYMLINK libspdk_bdev_malloc.so 00:03:13.064 LIB libspdk_bdev_aio.a 00:03:13.064 LIB libspdk_bdev_delay.a 00:03:13.323 LIB libspdk_bdev_iscsi.a 00:03:13.323 SO libspdk_bdev_aio.so.6.0 00:03:13.323 SO libspdk_bdev_delay.so.6.0 00:03:13.323 SYMLINK libspdk_bdev_passthru.so 00:03:13.323 SO libspdk_bdev_iscsi.so.6.0 00:03:13.323 SYMLINK libspdk_bdev_zone_block.so 00:03:13.323 SYMLINK libspdk_bdev_aio.so 00:03:13.323 SYMLINK libspdk_bdev_delay.so 00:03:13.323 SYMLINK libspdk_bdev_iscsi.so 00:03:13.323 LIB libspdk_bdev_virtio.a 00:03:13.323 LIB libspdk_bdev_lvol.a 00:03:13.323 SO libspdk_bdev_virtio.so.6.0 00:03:13.323 SO libspdk_bdev_lvol.so.6.0 00:03:13.323 SYMLINK libspdk_bdev_virtio.so 00:03:13.323 SYMLINK libspdk_bdev_lvol.so 00:03:13.582 LIB libspdk_bdev_raid.a 00:03:13.583 SO libspdk_bdev_raid.so.6.0 00:03:13.844 SYMLINK libspdk_bdev_raid.so 00:03:14.783 LIB libspdk_bdev_nvme.a 00:03:14.783 SO libspdk_bdev_nvme.so.7.1 00:03:15.042 SYMLINK libspdk_bdev_nvme.so 00:03:15.300 CC module/event/subsystems/iobuf/iobuf.o 00:03:15.300 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:15.300 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:15.300 CC module/event/subsystems/scheduler/scheduler.o 00:03:15.300 CC module/event/subsystems/sock/sock.o 00:03:15.300 CC module/event/subsystems/vmd/vmd.o 00:03:15.300 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:15.300 CC module/event/subsystems/fsdev/fsdev.o 00:03:15.300 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:15.300 CC module/event/subsystems/keyring/keyring.o 00:03:15.300 LIB libspdk_event_scheduler.a 00:03:15.300 LIB libspdk_event_sock.a 00:03:15.300 LIB libspdk_event_fsdev.a 00:03:15.300 SO libspdk_event_scheduler.so.4.0 00:03:15.300 SO libspdk_event_fsdev.so.1.0 00:03:15.300 LIB libspdk_event_vfu_tgt.a 00:03:15.300 SO libspdk_event_sock.so.5.0 00:03:15.559 LIB libspdk_event_keyring.a 00:03:15.559 LIB libspdk_event_vhost_blk.a 00:03:15.559 LIB libspdk_event_vmd.a 00:03:15.559 LIB libspdk_event_iobuf.a 00:03:15.559 SO libspdk_event_vfu_tgt.so.3.0 00:03:15.559 SO libspdk_event_keyring.so.1.0 00:03:15.559 SO libspdk_event_vhost_blk.so.3.0 00:03:15.559 SO libspdk_event_iobuf.so.3.0 00:03:15.559 SO 
libspdk_event_vmd.so.6.0 00:03:15.559 SYMLINK libspdk_event_scheduler.so 00:03:15.559 SYMLINK libspdk_event_fsdev.so 00:03:15.559 SYMLINK libspdk_event_sock.so 00:03:15.559 SYMLINK libspdk_event_vfu_tgt.so 00:03:15.559 SYMLINK libspdk_event_vhost_blk.so 00:03:15.559 SYMLINK libspdk_event_keyring.so 00:03:15.559 SYMLINK libspdk_event_iobuf.so 00:03:15.559 SYMLINK libspdk_event_vmd.so 00:03:15.559 CC module/event/subsystems/accel/accel.o 00:03:15.818 LIB libspdk_event_accel.a 00:03:15.818 SO libspdk_event_accel.so.6.0 00:03:15.818 SYMLINK libspdk_event_accel.so 00:03:16.077 CC module/event/subsystems/bdev/bdev.o 00:03:16.077 LIB libspdk_event_bdev.a 00:03:16.077 SO libspdk_event_bdev.so.6.0 00:03:16.334 SYMLINK libspdk_event_bdev.so 00:03:16.334 CC module/event/subsystems/scsi/scsi.o 00:03:16.334 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:16.334 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:16.334 CC module/event/subsystems/nbd/nbd.o 00:03:16.334 CC module/event/subsystems/ublk/ublk.o 00:03:16.593 LIB libspdk_event_ublk.a 00:03:16.593 SO libspdk_event_ublk.so.3.0 00:03:16.593 LIB libspdk_event_scsi.a 00:03:16.593 LIB libspdk_event_nbd.a 00:03:16.593 SO libspdk_event_scsi.so.6.0 00:03:16.593 SYMLINK libspdk_event_ublk.so 00:03:16.593 SO libspdk_event_nbd.so.6.0 00:03:16.593 LIB libspdk_event_nvmf.a 00:03:16.593 SO libspdk_event_nvmf.so.6.0 00:03:16.593 SYMLINK libspdk_event_scsi.so 00:03:16.593 SYMLINK libspdk_event_nbd.so 00:03:16.593 SYMLINK libspdk_event_nvmf.so 00:03:16.853 CC module/event/subsystems/iscsi/iscsi.o 00:03:16.853 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:16.853 LIB libspdk_event_iscsi.a 00:03:16.853 LIB libspdk_event_vhost_scsi.a 00:03:16.853 SO libspdk_event_iscsi.so.6.0 00:03:16.853 SO libspdk_event_vhost_scsi.so.3.0 00:03:16.853 SYMLINK libspdk_event_vhost_scsi.so 00:03:16.853 SYMLINK libspdk_event_iscsi.so 00:03:17.111 SO libspdk.so.6.0 00:03:17.111 SYMLINK libspdk.so 00:03:17.111 CC app/trace_record/trace_record.o 00:03:17.111 CC app/spdk_lspci/spdk_lspci.o 00:03:17.111 CC app/spdk_nvme_identify/identify.o 00:03:17.111 CC app/spdk_nvme_perf/perf.o 00:03:17.111 CC test/rpc_client/rpc_client_test.o 00:03:17.111 TEST_HEADER include/spdk/accel.h 00:03:17.111 TEST_HEADER include/spdk/accel_module.h 00:03:17.111 TEST_HEADER include/spdk/assert.h 00:03:17.111 CC app/spdk_top/spdk_top.o 00:03:17.111 TEST_HEADER include/spdk/barrier.h 00:03:17.111 CC app/spdk_nvme_discover/discovery_aer.o 00:03:17.111 TEST_HEADER include/spdk/base64.h 00:03:17.111 TEST_HEADER include/spdk/bdev.h 00:03:17.111 TEST_HEADER include/spdk/bdev_zone.h 00:03:17.111 TEST_HEADER include/spdk/bdev_module.h 00:03:17.111 TEST_HEADER include/spdk/bit_array.h 00:03:17.111 TEST_HEADER include/spdk/bit_pool.h 00:03:17.111 TEST_HEADER include/spdk/blob_bdev.h 00:03:17.111 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:17.111 CXX app/trace/trace.o 00:03:17.111 TEST_HEADER include/spdk/blobfs.h 00:03:17.111 TEST_HEADER include/spdk/blob.h 00:03:17.111 TEST_HEADER include/spdk/config.h 00:03:17.111 TEST_HEADER include/spdk/conf.h 00:03:17.111 TEST_HEADER include/spdk/cpuset.h 00:03:17.111 TEST_HEADER include/spdk/crc16.h 00:03:17.111 TEST_HEADER include/spdk/crc32.h 00:03:17.111 TEST_HEADER include/spdk/crc64.h 00:03:17.111 TEST_HEADER include/spdk/dif.h 00:03:17.111 TEST_HEADER include/spdk/dma.h 00:03:17.111 TEST_HEADER include/spdk/endian.h 00:03:17.111 TEST_HEADER include/spdk/env_dpdk.h 00:03:17.111 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:17.111 TEST_HEADER include/spdk/env.h 
00:03:17.111 TEST_HEADER include/spdk/event.h 00:03:17.111 TEST_HEADER include/spdk/fd_group.h 00:03:17.111 TEST_HEADER include/spdk/fd.h 00:03:17.111 TEST_HEADER include/spdk/file.h 00:03:17.111 TEST_HEADER include/spdk/fsdev.h 00:03:17.111 TEST_HEADER include/spdk/fsdev_module.h 00:03:17.111 TEST_HEADER include/spdk/ftl.h 00:03:17.111 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:17.111 TEST_HEADER include/spdk/gpt_spec.h 00:03:17.111 TEST_HEADER include/spdk/hexlify.h 00:03:17.111 TEST_HEADER include/spdk/histogram_data.h 00:03:17.111 TEST_HEADER include/spdk/idxd.h 00:03:17.111 TEST_HEADER include/spdk/idxd_spec.h 00:03:17.374 TEST_HEADER include/spdk/init.h 00:03:17.374 TEST_HEADER include/spdk/ioat.h 00:03:17.374 TEST_HEADER include/spdk/ioat_spec.h 00:03:17.374 TEST_HEADER include/spdk/iscsi_spec.h 00:03:17.374 TEST_HEADER include/spdk/json.h 00:03:17.374 TEST_HEADER include/spdk/jsonrpc.h 00:03:17.374 TEST_HEADER include/spdk/keyring.h 00:03:17.374 TEST_HEADER include/spdk/keyring_module.h 00:03:17.375 TEST_HEADER include/spdk/likely.h 00:03:17.375 TEST_HEADER include/spdk/log.h 00:03:17.375 TEST_HEADER include/spdk/lvol.h 00:03:17.375 TEST_HEADER include/spdk/md5.h 00:03:17.375 TEST_HEADER include/spdk/memory.h 00:03:17.375 TEST_HEADER include/spdk/mmio.h 00:03:17.375 TEST_HEADER include/spdk/nbd.h 00:03:17.375 TEST_HEADER include/spdk/notify.h 00:03:17.375 TEST_HEADER include/spdk/net.h 00:03:17.375 TEST_HEADER include/spdk/nvme.h 00:03:17.375 TEST_HEADER include/spdk/nvme_intel.h 00:03:17.375 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:17.375 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:17.375 TEST_HEADER include/spdk/nvme_spec.h 00:03:17.375 TEST_HEADER include/spdk/nvme_zns.h 00:03:17.375 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:17.375 TEST_HEADER include/spdk/nvmf.h 00:03:17.375 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:17.375 TEST_HEADER include/spdk/nvmf_spec.h 00:03:17.375 TEST_HEADER include/spdk/nvmf_transport.h 00:03:17.375 TEST_HEADER include/spdk/opal.h 00:03:17.375 TEST_HEADER include/spdk/opal_spec.h 00:03:17.375 TEST_HEADER include/spdk/pci_ids.h 00:03:17.375 TEST_HEADER include/spdk/pipe.h 00:03:17.375 CC app/iscsi_tgt/iscsi_tgt.o 00:03:17.375 TEST_HEADER include/spdk/queue.h 00:03:17.375 TEST_HEADER include/spdk/reduce.h 00:03:17.375 TEST_HEADER include/spdk/rpc.h 00:03:17.375 TEST_HEADER include/spdk/scheduler.h 00:03:17.375 CC app/nvmf_tgt/nvmf_main.o 00:03:17.375 TEST_HEADER include/spdk/scsi.h 00:03:17.375 TEST_HEADER include/spdk/scsi_spec.h 00:03:17.375 TEST_HEADER include/spdk/sock.h 00:03:17.375 CC app/spdk_dd/spdk_dd.o 00:03:17.375 TEST_HEADER include/spdk/stdinc.h 00:03:17.375 TEST_HEADER include/spdk/string.h 00:03:17.375 TEST_HEADER include/spdk/thread.h 00:03:17.375 TEST_HEADER include/spdk/trace.h 00:03:17.375 TEST_HEADER include/spdk/trace_parser.h 00:03:17.375 TEST_HEADER include/spdk/tree.h 00:03:17.375 TEST_HEADER include/spdk/util.h 00:03:17.375 TEST_HEADER include/spdk/ublk.h 00:03:17.375 TEST_HEADER include/spdk/uuid.h 00:03:17.375 TEST_HEADER include/spdk/version.h 00:03:17.375 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:17.375 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:17.375 TEST_HEADER include/spdk/vhost.h 00:03:17.375 TEST_HEADER include/spdk/vmd.h 00:03:17.375 TEST_HEADER include/spdk/zipf.h 00:03:17.375 TEST_HEADER include/spdk/xor.h 00:03:17.375 CXX test/cpp_headers/accel.o 00:03:17.375 CXX test/cpp_headers/assert.o 00:03:17.375 CXX test/cpp_headers/accel_module.o 00:03:17.375 CXX test/cpp_headers/barrier.o 
00:03:17.375 CXX test/cpp_headers/base64.o 00:03:17.375 CXX test/cpp_headers/bdev.o 00:03:17.375 CXX test/cpp_headers/bdev_module.o 00:03:17.375 CXX test/cpp_headers/bdev_zone.o 00:03:17.375 CXX test/cpp_headers/bit_array.o 00:03:17.375 CXX test/cpp_headers/bit_pool.o 00:03:17.375 CXX test/cpp_headers/blob_bdev.o 00:03:17.375 CXX test/cpp_headers/blobfs_bdev.o 00:03:17.375 CXX test/cpp_headers/blobfs.o 00:03:17.375 CXX test/cpp_headers/blob.o 00:03:17.375 CXX test/cpp_headers/config.o 00:03:17.375 CXX test/cpp_headers/crc16.o 00:03:17.375 CXX test/cpp_headers/conf.o 00:03:17.375 CXX test/cpp_headers/cpuset.o 00:03:17.375 CXX test/cpp_headers/crc32.o 00:03:17.375 CXX test/cpp_headers/crc64.o 00:03:17.375 CXX test/cpp_headers/dif.o 00:03:17.375 CXX test/cpp_headers/dma.o 00:03:17.375 CXX test/cpp_headers/endian.o 00:03:17.375 CXX test/cpp_headers/env_dpdk.o 00:03:17.375 CXX test/cpp_headers/env.o 00:03:17.375 CC app/spdk_tgt/spdk_tgt.o 00:03:17.375 CXX test/cpp_headers/event.o 00:03:17.375 CXX test/cpp_headers/fd_group.o 00:03:17.375 CXX test/cpp_headers/fd.o 00:03:17.375 CXX test/cpp_headers/file.o 00:03:17.375 CXX test/cpp_headers/fsdev_module.o 00:03:17.375 CXX test/cpp_headers/fsdev.o 00:03:17.375 CXX test/cpp_headers/ftl.o 00:03:17.375 CXX test/cpp_headers/fuse_dispatcher.o 00:03:17.375 CXX test/cpp_headers/gpt_spec.o 00:03:17.375 CXX test/cpp_headers/hexlify.o 00:03:17.375 CXX test/cpp_headers/histogram_data.o 00:03:17.375 CXX test/cpp_headers/idxd.o 00:03:17.375 CXX test/cpp_headers/idxd_spec.o 00:03:17.375 CXX test/cpp_headers/init.o 00:03:17.375 CXX test/cpp_headers/ioat_spec.o 00:03:17.375 CXX test/cpp_headers/ioat.o 00:03:17.375 CXX test/cpp_headers/json.o 00:03:17.375 CXX test/cpp_headers/iscsi_spec.o 00:03:17.375 CXX test/cpp_headers/jsonrpc.o 00:03:17.375 CXX test/cpp_headers/keyring.o 00:03:17.375 CXX test/cpp_headers/log.o 00:03:17.375 CXX test/cpp_headers/likely.o 00:03:17.375 CXX test/cpp_headers/keyring_module.o 00:03:17.375 CXX test/cpp_headers/lvol.o 00:03:17.375 CXX test/cpp_headers/md5.o 00:03:17.375 CXX test/cpp_headers/memory.o 00:03:17.375 CXX test/cpp_headers/nbd.o 00:03:17.375 CXX test/cpp_headers/mmio.o 00:03:17.375 CXX test/cpp_headers/net.o 00:03:17.375 CXX test/cpp_headers/nvme.o 00:03:17.375 CXX test/cpp_headers/nvme_intel.o 00:03:17.375 CXX test/cpp_headers/notify.o 00:03:17.375 CXX test/cpp_headers/nvme_ocssd.o 00:03:17.375 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:17.375 CXX test/cpp_headers/nvme_spec.o 00:03:17.375 CC test/app/histogram_perf/histogram_perf.o 00:03:17.375 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:17.375 CXX test/cpp_headers/nvmf_cmd.o 00:03:17.375 CXX test/cpp_headers/nvme_zns.o 00:03:17.375 CXX test/cpp_headers/nvmf.o 00:03:17.375 CXX test/cpp_headers/nvmf_spec.o 00:03:17.375 CC test/app/stub/stub.o 00:03:17.375 CXX test/cpp_headers/opal_spec.o 00:03:17.375 CXX test/cpp_headers/nvmf_transport.o 00:03:17.375 CC test/env/vtophys/vtophys.o 00:03:17.375 CXX test/cpp_headers/opal.o 00:03:17.375 CXX test/cpp_headers/pci_ids.o 00:03:17.375 CC test/thread/poller_perf/poller_perf.o 00:03:17.375 CXX test/cpp_headers/queue.o 00:03:17.375 CXX test/cpp_headers/pipe.o 00:03:17.375 CC examples/util/zipf/zipf.o 00:03:17.375 CXX test/cpp_headers/reduce.o 00:03:17.375 CXX test/cpp_headers/rpc.o 00:03:17.375 CXX test/cpp_headers/scheduler.o 00:03:17.375 CXX test/cpp_headers/scsi_spec.o 00:03:17.375 CC test/app/jsoncat/jsoncat.o 00:03:17.375 CXX test/cpp_headers/scsi.o 00:03:17.375 CXX test/cpp_headers/sock.o 00:03:17.375 CC examples/ioat/verify/verify.o 
00:03:17.375 CXX test/cpp_headers/thread.o 00:03:17.375 CXX test/cpp_headers/stdinc.o 00:03:17.375 CXX test/cpp_headers/string.o 00:03:17.375 CXX test/cpp_headers/trace.o 00:03:17.375 CXX test/cpp_headers/tree.o 00:03:17.375 CC test/env/pci/pci_ut.o 00:03:17.375 CXX test/cpp_headers/trace_parser.o 00:03:17.375 CXX test/cpp_headers/ublk.o 00:03:17.375 CXX test/cpp_headers/uuid.o 00:03:17.375 CXX test/cpp_headers/util.o 00:03:17.375 CC app/fio/nvme/fio_plugin.o 00:03:17.375 CXX test/cpp_headers/version.o 00:03:17.375 CXX test/cpp_headers/vfio_user_pci.o 00:03:17.375 CC examples/ioat/perf/perf.o 00:03:17.375 CXX test/cpp_headers/vfio_user_spec.o 00:03:17.375 CXX test/cpp_headers/vhost.o 00:03:17.375 CXX test/cpp_headers/vmd.o 00:03:17.375 CXX test/cpp_headers/xor.o 00:03:17.375 CXX test/cpp_headers/zipf.o 00:03:17.375 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:17.375 CC test/env/memory/memory_ut.o 00:03:17.375 CC test/app/bdev_svc/bdev_svc.o 00:03:17.375 CC app/fio/bdev/fio_plugin.o 00:03:17.375 CC test/dma/test_dma/test_dma.o 00:03:17.635 LINK spdk_lspci 00:03:17.635 LINK spdk_nvme_discover 00:03:17.635 LINK rpc_client_test 00:03:17.635 LINK nvmf_tgt 00:03:17.635 LINK vtophys 00:03:17.635 CC test/env/mem_callbacks/mem_callbacks.o 00:03:17.635 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:17.635 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:17.635 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:17.635 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:17.635 LINK interrupt_tgt 00:03:17.893 LINK iscsi_tgt 00:03:17.893 LINK jsoncat 00:03:17.893 LINK spdk_trace_record 00:03:17.893 LINK spdk_dd 00:03:17.893 LINK stub 00:03:17.893 LINK histogram_perf 00:03:17.893 LINK spdk_tgt 00:03:17.893 LINK poller_perf 00:03:17.893 LINK zipf 00:03:17.893 LINK verify 00:03:17.893 LINK bdev_svc 00:03:18.152 LINK env_dpdk_post_init 00:03:18.152 LINK ioat_perf 00:03:18.152 LINK pci_ut 00:03:18.152 LINK nvme_fuzz 00:03:18.152 CC test/event/reactor_perf/reactor_perf.o 00:03:18.152 CC test/event/reactor/reactor.o 00:03:18.152 CC test/event/event_perf/event_perf.o 00:03:18.152 LINK spdk_trace 00:03:18.152 CC examples/vmd/led/led.o 00:03:18.152 CC examples/vmd/lsvmd/lsvmd.o 00:03:18.152 CC test/event/app_repeat/app_repeat.o 00:03:18.152 CC examples/idxd/perf/perf.o 00:03:18.152 CC examples/sock/hello_world/hello_sock.o 00:03:18.152 CC test/event/scheduler/scheduler.o 00:03:18.152 CC examples/thread/thread/thread_ex.o 00:03:18.152 LINK vhost_fuzz 00:03:18.413 LINK mem_callbacks 00:03:18.413 LINK spdk_bdev 00:03:18.413 LINK test_dma 00:03:18.413 LINK reactor_perf 00:03:18.413 LINK reactor 00:03:18.413 LINK spdk_nvme 00:03:18.413 LINK app_repeat 00:03:18.413 LINK lsvmd 00:03:18.413 LINK led 00:03:18.413 LINK event_perf 00:03:18.413 LINK spdk_nvme_identify 00:03:18.413 LINK spdk_nvme_perf 00:03:18.413 LINK hello_sock 00:03:18.413 LINK scheduler 00:03:18.413 CC app/vhost/vhost.o 00:03:18.413 LINK thread 00:03:18.413 LINK idxd_perf 00:03:18.413 LINK spdk_top 00:03:18.672 CC test/nvme/reset/reset.o 00:03:18.672 CC test/nvme/overhead/overhead.o 00:03:18.672 CC test/nvme/err_injection/err_injection.o 00:03:18.672 CC test/nvme/aer/aer.o 00:03:18.672 CC test/nvme/startup/startup.o 00:03:18.672 CC test/nvme/e2edp/nvme_dp.o 00:03:18.672 CC test/nvme/reserve/reserve.o 00:03:18.672 CC test/nvme/connect_stress/connect_stress.o 00:03:18.672 CC test/nvme/sgl/sgl.o 00:03:18.672 CC test/nvme/boot_partition/boot_partition.o 00:03:18.672 CC test/nvme/compliance/nvme_compliance.o 00:03:18.672 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:03:18.672 CC test/nvme/fdp/fdp.o 00:03:18.672 CC test/nvme/simple_copy/simple_copy.o 00:03:18.672 CC test/nvme/cuse/cuse.o 00:03:18.672 CC test/nvme/fused_ordering/fused_ordering.o 00:03:18.672 CC test/blobfs/mkfs/mkfs.o 00:03:18.672 CC test/accel/dif/dif.o 00:03:18.672 LINK vhost 00:03:18.672 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:18.672 CC examples/nvme/arbitration/arbitration.o 00:03:18.672 CC examples/nvme/hello_world/hello_world.o 00:03:18.672 LINK memory_ut 00:03:18.672 CC examples/nvme/abort/abort.o 00:03:18.672 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:18.672 CC examples/nvme/reconnect/reconnect.o 00:03:18.672 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:18.672 CC examples/nvme/hotplug/hotplug.o 00:03:18.672 CC test/lvol/esnap/esnap.o 00:03:18.672 LINK startup 00:03:18.672 LINK connect_stress 00:03:18.672 LINK doorbell_aers 00:03:18.672 LINK fused_ordering 00:03:18.672 LINK mkfs 00:03:18.672 LINK boot_partition 00:03:18.931 LINK err_injection 00:03:18.931 LINK nvme_dp 00:03:18.931 LINK pmr_persistence 00:03:18.931 LINK sgl 00:03:18.931 CC examples/accel/perf/accel_perf.o 00:03:18.931 LINK reserve 00:03:18.931 CC examples/blob/cli/blobcli.o 00:03:18.931 LINK overhead 00:03:18.931 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:18.931 LINK nvme_compliance 00:03:18.931 LINK simple_copy 00:03:18.931 CC examples/blob/hello_world/hello_blob.o 00:03:18.931 LINK iscsi_fuzz 00:03:18.931 LINK hotplug 00:03:18.931 LINK cmb_copy 00:03:18.931 LINK reset 00:03:18.931 LINK aer 00:03:18.931 LINK hello_world 00:03:18.931 LINK arbitration 00:03:18.931 LINK fdp 00:03:18.931 LINK abort 00:03:18.931 LINK reconnect 00:03:18.931 LINK hello_fsdev 00:03:18.931 LINK hello_blob 00:03:18.931 LINK nvme_manage 00:03:18.931 LINK dif 00:03:19.188 LINK blobcli 00:03:19.189 LINK accel_perf 00:03:19.446 CC test/bdev/bdevio/bdevio.o 00:03:19.446 LINK cuse 00:03:19.446 CC examples/bdev/hello_world/hello_bdev.o 00:03:19.446 CC examples/bdev/bdevperf/bdevperf.o 00:03:19.705 LINK bdevio 00:03:19.705 LINK hello_bdev 00:03:20.046 LINK bdevperf 00:03:20.338 CC examples/nvmf/nvmf/nvmf.o 00:03:20.597 LINK nvmf 00:03:21.976 LINK esnap 00:03:21.976 00:03:21.976 real 0m44.030s 00:03:21.976 user 6m24.777s 00:03:21.976 sys 3m34.077s 00:03:21.976 17:40:09 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:21.976 17:40:09 make -- common/autotest_common.sh@10 -- $ set +x 00:03:21.976 ************************************ 00:03:21.976 END TEST make 00:03:21.976 ************************************ 00:03:22.236 17:40:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:22.236 17:40:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:22.236 17:40:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:22.236 17:40:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.236 17:40:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:22.236 17:40:09 -- pm/common@44 -- $ pid=2685804 00:03:22.236 17:40:09 -- pm/common@50 -- $ kill -TERM 2685804 00:03:22.236 17:40:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.236 17:40:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:22.236 17:40:09 -- pm/common@44 -- $ pid=2685805 00:03:22.236 17:40:09 -- pm/common@50 -- $ kill -TERM 2685805 00:03:22.236 17:40:09 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:22.236 17:40:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:22.236 17:40:09 -- pm/common@44 -- $ pid=2685806 00:03:22.236 17:40:09 -- pm/common@50 -- $ kill -TERM 2685806 00:03:22.236 17:40:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.236 17:40:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:22.236 17:40:09 -- pm/common@44 -- $ pid=2685834 00:03:22.236 17:40:09 -- pm/common@50 -- $ sudo -E kill -TERM 2685834 00:03:22.236 17:40:09 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:22.236 17:40:09 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:22.236 17:40:09 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:22.236 17:40:09 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:22.236 17:40:09 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:22.236 17:40:09 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:22.236 17:40:09 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:22.236 17:40:09 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:22.236 17:40:09 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:22.236 17:40:09 -- scripts/common.sh@336 -- # IFS=.-: 00:03:22.236 17:40:09 -- scripts/common.sh@336 -- # read -ra ver1 00:03:22.236 17:40:09 -- scripts/common.sh@337 -- # IFS=.-: 00:03:22.236 17:40:09 -- scripts/common.sh@337 -- # read -ra ver2 00:03:22.236 17:40:09 -- scripts/common.sh@338 -- # local 'op=<' 00:03:22.236 17:40:09 -- scripts/common.sh@340 -- # ver1_l=2 00:03:22.236 17:40:09 -- scripts/common.sh@341 -- # ver2_l=1 00:03:22.236 17:40:09 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:22.236 17:40:09 -- scripts/common.sh@344 -- # case "$op" in 00:03:22.236 17:40:09 -- scripts/common.sh@345 -- # : 1 00:03:22.236 17:40:09 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:22.236 17:40:09 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:22.236 17:40:09 -- scripts/common.sh@365 -- # decimal 1 00:03:22.236 17:40:09 -- scripts/common.sh@353 -- # local d=1 00:03:22.236 17:40:09 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:22.236 17:40:09 -- scripts/common.sh@355 -- # echo 1 00:03:22.236 17:40:09 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:22.236 17:40:09 -- scripts/common.sh@366 -- # decimal 2 00:03:22.236 17:40:09 -- scripts/common.sh@353 -- # local d=2 00:03:22.236 17:40:09 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:22.236 17:40:09 -- scripts/common.sh@355 -- # echo 2 00:03:22.236 17:40:09 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:22.236 17:40:09 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:22.236 17:40:09 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:22.236 17:40:09 -- scripts/common.sh@368 -- # return 0 00:03:22.236 17:40:09 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:22.236 17:40:09 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:22.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.236 --rc genhtml_branch_coverage=1 00:03:22.236 --rc genhtml_function_coverage=1 00:03:22.236 --rc genhtml_legend=1 00:03:22.236 --rc geninfo_all_blocks=1 00:03:22.236 --rc geninfo_unexecuted_blocks=1 00:03:22.236 00:03:22.236 ' 00:03:22.236 17:40:09 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:22.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.236 --rc genhtml_branch_coverage=1 00:03:22.236 --rc genhtml_function_coverage=1 00:03:22.236 --rc genhtml_legend=1 00:03:22.236 --rc geninfo_all_blocks=1 00:03:22.236 --rc geninfo_unexecuted_blocks=1 00:03:22.236 00:03:22.236 ' 00:03:22.236 17:40:09 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:22.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.236 --rc genhtml_branch_coverage=1 00:03:22.236 --rc genhtml_function_coverage=1 00:03:22.236 --rc genhtml_legend=1 00:03:22.236 --rc geninfo_all_blocks=1 00:03:22.236 --rc geninfo_unexecuted_blocks=1 00:03:22.236 00:03:22.236 ' 00:03:22.236 17:40:09 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:22.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.236 --rc genhtml_branch_coverage=1 00:03:22.236 --rc genhtml_function_coverage=1 00:03:22.236 --rc genhtml_legend=1 00:03:22.236 --rc geninfo_all_blocks=1 00:03:22.236 --rc geninfo_unexecuted_blocks=1 00:03:22.236 00:03:22.236 ' 00:03:22.236 17:40:09 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:22.236 17:40:09 -- nvmf/common.sh@7 -- # uname -s 00:03:22.236 17:40:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:22.236 17:40:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:22.236 17:40:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:22.236 17:40:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:22.236 17:40:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:22.236 17:40:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:22.236 17:40:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:22.236 17:40:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:22.236 17:40:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:22.236 17:40:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:22.236 17:40:09 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:03:22.236 17:40:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:03:22.236 17:40:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:22.236 17:40:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:22.236 17:40:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:22.236 17:40:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:22.236 17:40:09 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:22.236 17:40:09 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:22.236 17:40:09 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:22.236 17:40:09 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:22.236 17:40:09 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:22.236 17:40:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.236 17:40:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.236 17:40:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.236 17:40:09 -- paths/export.sh@5 -- # export PATH 00:03:22.236 17:40:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.236 17:40:09 -- nvmf/common.sh@51 -- # : 0 00:03:22.236 17:40:09 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:22.236 17:40:09 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:22.236 17:40:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:22.236 17:40:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:22.236 17:40:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:22.236 17:40:09 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:22.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:22.236 17:40:09 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:22.236 17:40:09 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:22.236 17:40:09 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:22.236 17:40:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:22.236 17:40:09 -- spdk/autotest.sh@32 -- # uname -s 00:03:22.236 17:40:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:22.237 17:40:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:22.237 17:40:09 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
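
The "[: : integer expression expected" complaint recorded above comes from test/nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': a numeric test against an expansion that is empty under this configuration. A minimal bash sketch of the failure and a defensive rewrite (the variable name is hypothetical, not the one common.sh actually tests):

  # Reproduces the recorded error: -eq requires an integer on both sides.
  flag=""
  [ "$flag" -eq 1 ] && echo enabled   # -> "[: : integer expression expected"

  # Defensive form: default the empty expansion so the test stays numeric.
  [ "${flag:-0}" -eq 1 ] && echo enabled
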
00:03:22.237 17:40:09 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:22.237 17:40:09 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:22.237 17:40:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:22.237 17:40:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:22.237 17:40:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:22.237 17:40:09 -- spdk/autotest.sh@48 -- # udevadm_pid=2749326 00:03:22.237 17:40:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:22.237 17:40:09 -- pm/common@17 -- # local monitor 00:03:22.237 17:40:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.237 17:40:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.237 17:40:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.237 17:40:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:22.237 17:40:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.237 17:40:09 -- pm/common@25 -- # sleep 1 00:03:22.237 17:40:09 -- pm/common@21 -- # date +%s 00:03:22.237 17:40:09 -- pm/common@21 -- # date +%s 00:03:22.237 17:40:09 -- pm/common@21 -- # date +%s 00:03:22.237 17:40:09 -- pm/common@21 -- # date +%s 00:03:22.237 17:40:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733503209 00:03:22.237 17:40:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733503209 00:03:22.237 17:40:09 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733503209 00:03:22.237 17:40:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733503209 00:03:22.237 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733503209_collect-cpu-load.pm.log 00:03:22.237 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733503209_collect-vmstat.pm.log 00:03:22.237 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733503209_collect-cpu-temp.pm.log 00:03:22.237 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733503209_collect-bmc-pm.bmc.pm.log 00:03:23.175 17:40:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:23.175 17:40:10 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:23.175 17:40:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:23.175 17:40:10 -- common/autotest_common.sh@10 -- # set +x 00:03:23.175 17:40:10 -- spdk/autotest.sh@59 -- # create_test_list 00:03:23.175 17:40:10 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:23.175 17:40:10 -- common/autotest_common.sh@10 -- # set +x 00:03:23.175 17:40:10 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:23.175 17:40:10 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:23.175 17:40:10 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:23.175 17:40:10 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:23.175 17:40:10 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:23.175 17:40:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:23.175 17:40:10 -- common/autotest_common.sh@1457 -- # uname 00:03:23.175 17:40:10 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:23.175 17:40:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:23.175 17:40:10 -- common/autotest_common.sh@1477 -- # uname 00:03:23.175 17:40:10 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:23.175 17:40:10 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:23.175 17:40:10 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:23.434 lcov: LCOV version 1.15 00:03:23.434 17:40:11 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:33.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:33.412 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:43.392 17:40:31 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:43.392 17:40:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:43.392 17:40:31 -- common/autotest_common.sh@10 -- # set +x 00:03:43.392 17:40:31 -- spdk/autotest.sh@78 -- # rm -f 00:03:43.392 17:40:31 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.926 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:45.926 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:45.926 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:45.926 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:45.926 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:45.926 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:45.926 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:45.926 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:46.184 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:46.184 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:46.184 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:46.184 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:46.184 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:46.184 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:46.184 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:46.185 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:46.185 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:46.443 17:40:34 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:46.443 17:40:34 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:46.443 17:40:34 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:46.443 17:40:34 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:46.443 17:40:34 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:46.443 17:40:34 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:46.443 17:40:34 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:46.443 17:40:34 -- common/autotest_common.sh@1669 -- # bdf=0000:65:00.0 00:03:46.443 17:40:34 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:46.443 17:40:34 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:46.443 17:40:34 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:46.443 17:40:34 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:46.443 17:40:34 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:46.443 17:40:34 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:46.443 17:40:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:46.443 17:40:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:46.443 17:40:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:46.443 17:40:34 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:46.443 17:40:34 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:46.443 No valid GPT data, bailing 00:03:46.443 17:40:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:46.443 17:40:34 -- scripts/common.sh@394 -- # pt= 00:03:46.443 17:40:34 -- scripts/common.sh@395 -- # return 1 00:03:46.443 17:40:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:46.443 1+0 records in 00:03:46.443 1+0 records out 00:03:46.443 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00176618 s, 594 MB/s 00:03:46.443 17:40:34 -- spdk/autotest.sh@105 -- # sync 00:03:46.444 17:40:34 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:46.444 17:40:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:46.444 17:40:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:51.840 17:40:39 -- spdk/autotest.sh@111 -- # uname -s 00:03:51.840 17:40:39 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:51.840 17:40:39 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:51.840 17:40:39 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:53.746 Hugepages 00:03:53.746 node hugesize free / total 00:03:53.746 node0 1048576kB 0 / 0 00:03:53.746 node0 2048kB 0 / 0 00:03:53.746 node1 1048576kB 0 / 0 00:03:53.746 node1 2048kB 0 / 0 00:03:53.746 00:03:53.746 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:53.746 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:53.746 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:53.746 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:53.746 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:53.746 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:53.746 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:53.746 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:53.746 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:54.005 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:54.005 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:54.005 I/OAT 0000:80:01.1 8086 0b00 1 
ioatdma - - 00:03:54.005 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:54.005 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:54.005 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:54.005 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:54.005 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:54.005 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:03:54.005 17:40:41 -- spdk/autotest.sh@117 -- # uname -s 00:03:54.005 17:40:41 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:54.005 17:40:41 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:54.005 17:40:41 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:56.539 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:56.539 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:56.539 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:56.539 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:56.539 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:56.539 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:56.539 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:56.539 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:56.539 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:56.539 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:56.539 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:56.539 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:56.539 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:56.539 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:56.539 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:56.539 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:58.480 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:58.739 17:40:46 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:59.675 17:40:47 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:59.675 17:40:47 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:59.675 17:40:47 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:59.675 17:40:47 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:59.675 17:40:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:59.675 17:40:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:59.675 17:40:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:59.675 17:40:47 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:59.675 17:40:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:59.675 17:40:47 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:59.675 17:40:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:59.675 17:40:47 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.213 Waiting for block devices as requested 00:04:02.213 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:02.213 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:02.213 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:02.473 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:02.473 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:02.473 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:02.473 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:02.732 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:02.732 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:02.993 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:02.993 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 
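
The get_nvme_bdfs trace a few entries above builds the controller list by rendering an SPDK JSON config with gen_nvme.sh and pulling each controller's PCI address (traddr) out with jq. The same idea as a standalone sketch, with rootdir taken from this run's workspace:

  # Enumerate NVMe BDFs the way the traced helper does.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"   # prints 0000:65:00.0 on this machine
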
00:04:02.993 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:02.993 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:03.253 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:03.253 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:03.253 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:03.253 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:03.513 17:40:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:03.513 17:40:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:03.513 17:40:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:03.513 17:40:51 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:04:03.513 17:40:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:03.513 17:40:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:03.513 17:40:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:03.513 17:40:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:03.513 17:40:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:03.513 17:40:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:03.513 17:40:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:03.513 17:40:51 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:03.513 17:40:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:03.773 17:40:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:04:03.773 17:40:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:03.773 17:40:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:03.773 17:40:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:03.773 17:40:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:03.773 17:40:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:03.773 17:40:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:03.773 17:40:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:03.773 17:40:51 -- common/autotest_common.sh@1543 -- # continue 00:04:03.773 17:40:51 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:03.773 17:40:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:03.773 17:40:51 -- common/autotest_common.sh@10 -- # set +x 00:04:03.773 17:40:51 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:03.773 17:40:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.773 17:40:51 -- common/autotest_common.sh@10 -- # set +x 00:04:03.773 17:40:51 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:06.313 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:06.313 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:06.313 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:06.313 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:06.313 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:06.313 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:06.313 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:06.313 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:06.313 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:06.313 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:06.313 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:06.313 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:06.313 0000:00:01.2 
(8086 0b00): ioatdma -> vfio-pci 00:04:06.313 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:06.313 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:06.313 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:06.313 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:06.572 17:40:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:06.572 17:40:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:06.572 17:40:54 -- common/autotest_common.sh@10 -- # set +x 00:04:06.832 17:40:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:06.832 17:40:54 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:06.832 17:40:54 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:06.832 17:40:54 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:06.832 17:40:54 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:06.832 17:40:54 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:06.832 17:40:54 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:06.832 17:40:54 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:06.832 17:40:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:06.832 17:40:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:06.832 17:40:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:06.832 17:40:54 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:06.832 17:40:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:06.832 17:40:54 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:06.832 17:40:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:06.833 17:40:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:06.833 17:40:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:06.833 17:40:54 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:04:06.833 17:40:54 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:06.833 17:40:54 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:06.833 17:40:54 -- common/autotest_common.sh@1572 -- # return 0 00:04:06.833 17:40:54 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:06.833 17:40:54 -- common/autotest_common.sh@1580 -- # return 0 00:04:06.833 17:40:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:06.833 17:40:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:06.833 17:40:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:06.833 17:40:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:06.833 17:40:54 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:06.833 17:40:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.833 17:40:54 -- common/autotest_common.sh@10 -- # set +x 00:04:06.833 17:40:54 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:06.833 17:40:54 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:06.833 17:40:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.833 17:40:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.833 17:40:54 -- common/autotest_common.sh@10 -- # set +x 00:04:06.833 ************************************ 00:04:06.833 START TEST env 00:04:06.833 ************************************ 00:04:06.833 17:40:54 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
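
opal_revert_cleanup above filters the enumerated BDFs by PCI device ID: each controller's ID is read from sysfs and compared against 0x0a54, the ID passed to get_nvme_bdfs_by_id. The controller in this rig reports 0xa80a (vendor 144d), so the list stays empty and the revert is skipped. A sketch of that filter, with array names following the trace:

  # Keep only controllers whose PCI device ID matches the target.
  _bdfs=(0000:65:00.0)   # from the enumeration above
  target=0x0a54
  bdfs=()
  for bdf in "${_bdfs[@]}"; do
      device=$(cat "/sys/bus/pci/devices/$bdf/device")
      [[ $device == "$target" ]] && bdfs+=("$bdf")
  done
  printf '%s\n' "${bdfs[@]}"   # empty here: 0xa80a does not match
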
00:04:06.833 * Looking for test storage... 00:04:06.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:06.833 17:40:54 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:06.833 17:40:54 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:06.833 17:40:54 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:06.833 17:40:54 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:06.833 17:40:54 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.833 17:40:54 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.833 17:40:54 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.833 17:40:54 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.833 17:40:54 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.833 17:40:54 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.833 17:40:54 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.833 17:40:54 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.833 17:40:54 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.833 17:40:54 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.833 17:40:54 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.833 17:40:54 env -- scripts/common.sh@344 -- # case "$op" in 00:04:06.833 17:40:54 env -- scripts/common.sh@345 -- # : 1 00:04:06.833 17:40:54 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.833 17:40:54 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:06.833 17:40:54 env -- scripts/common.sh@365 -- # decimal 1 00:04:06.833 17:40:54 env -- scripts/common.sh@353 -- # local d=1 00:04:06.833 17:40:54 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.833 17:40:54 env -- scripts/common.sh@355 -- # echo 1 00:04:06.833 17:40:54 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.833 17:40:54 env -- scripts/common.sh@366 -- # decimal 2 00:04:06.833 17:40:54 env -- scripts/common.sh@353 -- # local d=2 00:04:07.093 17:40:54 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.093 17:40:54 env -- scripts/common.sh@355 -- # echo 2 00:04:07.093 17:40:54 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.093 17:40:54 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.093 17:40:54 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.093 17:40:54 env -- scripts/common.sh@368 -- # return 0 00:04:07.093 17:40:54 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.093 17:40:54 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:07.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.093 --rc genhtml_branch_coverage=1 00:04:07.093 --rc genhtml_function_coverage=1 00:04:07.093 --rc genhtml_legend=1 00:04:07.093 --rc geninfo_all_blocks=1 00:04:07.093 --rc geninfo_unexecuted_blocks=1 00:04:07.093 00:04:07.093 ' 00:04:07.093 17:40:54 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:07.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.093 --rc genhtml_branch_coverage=1 00:04:07.093 --rc genhtml_function_coverage=1 00:04:07.093 --rc genhtml_legend=1 00:04:07.093 --rc geninfo_all_blocks=1 00:04:07.093 --rc geninfo_unexecuted_blocks=1 00:04:07.093 00:04:07.093 ' 00:04:07.093 17:40:54 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:07.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.093 --rc genhtml_branch_coverage=1 00:04:07.093 
--rc genhtml_function_coverage=1 00:04:07.093 --rc genhtml_legend=1 00:04:07.093 --rc geninfo_all_blocks=1 00:04:07.093 --rc geninfo_unexecuted_blocks=1 00:04:07.093 00:04:07.093 ' 00:04:07.093 17:40:54 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:07.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.093 --rc genhtml_branch_coverage=1 00:04:07.093 --rc genhtml_function_coverage=1 00:04:07.093 --rc genhtml_legend=1 00:04:07.093 --rc geninfo_all_blocks=1 00:04:07.093 --rc geninfo_unexecuted_blocks=1 00:04:07.093 00:04:07.093 ' 00:04:07.093 17:40:54 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:07.093 17:40:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.093 17:40:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.093 17:40:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.093 ************************************ 00:04:07.093 START TEST env_memory 00:04:07.093 ************************************ 00:04:07.093 17:40:54 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:07.093 00:04:07.093 00:04:07.093 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.093 http://cunit.sourceforge.net/ 00:04:07.093 00:04:07.093 00:04:07.093 Suite: memory 00:04:07.093 Test: alloc and free memory map ...[2024-12-06 17:40:54.720536] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:07.093 passed 00:04:07.093 Test: mem map translation ...[2024-12-06 17:40:54.746126] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:07.093 [2024-12-06 17:40:54.746153] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:07.093 [2024-12-06 17:40:54.746200] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:07.093 [2024-12-06 17:40:54.746213] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:07.093 passed 00:04:07.093 Test: mem map registration ...[2024-12-06 17:40:54.801356] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:07.093 [2024-12-06 17:40:54.801389] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:07.093 passed 00:04:07.093 Test: mem map adjacent registrations ...passed 00:04:07.093 00:04:07.093 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.093 suites 1 1 n/a 0 0 00:04:07.093 tests 4 4 4 0 0 00:04:07.093 asserts 152 152 152 0 n/a 00:04:07.093 00:04:07.093 Elapsed time = 0.182 seconds 00:04:07.093 00:04:07.093 real 0m0.190s 00:04:07.093 user 0m0.183s 00:04:07.093 sys 0m0.006s 00:04:07.094 17:40:54 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.094 17:40:54 env.env_memory -- 
common/autotest_common.sh@10 -- # set +x 00:04:07.094 ************************************ 00:04:07.094 END TEST env_memory 00:04:07.094 ************************************ 00:04:07.094 17:40:54 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:07.094 17:40:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.094 17:40:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.094 17:40:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.364 ************************************ 00:04:07.364 START TEST env_vtophys 00:04:07.364 ************************************ 00:04:07.364 17:40:54 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:07.364 EAL: lib.eal log level changed from notice to debug 00:04:07.364 EAL: Detected lcore 0 as core 0 on socket 0 00:04:07.364 EAL: Detected lcore 1 as core 1 on socket 0 00:04:07.364 EAL: Detected lcore 2 as core 2 on socket 0 00:04:07.364 EAL: Detected lcore 3 as core 3 on socket 0 00:04:07.364 EAL: Detected lcore 4 as core 4 on socket 0 00:04:07.364 EAL: Detected lcore 5 as core 5 on socket 0 00:04:07.364 EAL: Detected lcore 6 as core 6 on socket 0 00:04:07.364 EAL: Detected lcore 7 as core 7 on socket 0 00:04:07.364 EAL: Detected lcore 8 as core 8 on socket 0 00:04:07.364 EAL: Detected lcore 9 as core 9 on socket 0 00:04:07.364 EAL: Detected lcore 10 as core 10 on socket 0 00:04:07.364 EAL: Detected lcore 11 as core 11 on socket 0 00:04:07.364 EAL: Detected lcore 12 as core 12 on socket 0 00:04:07.364 EAL: Detected lcore 13 as core 13 on socket 0 00:04:07.364 EAL: Detected lcore 14 as core 14 on socket 0 00:04:07.364 EAL: Detected lcore 15 as core 15 on socket 0 00:04:07.364 EAL: Detected lcore 16 as core 16 on socket 0 00:04:07.364 EAL: Detected lcore 17 as core 17 on socket 0 00:04:07.364 EAL: Detected lcore 18 as core 18 on socket 0 00:04:07.364 EAL: Detected lcore 19 as core 19 on socket 0 00:04:07.364 EAL: Detected lcore 20 as core 20 on socket 0 00:04:07.364 EAL: Detected lcore 21 as core 21 on socket 0 00:04:07.364 EAL: Detected lcore 22 as core 22 on socket 0 00:04:07.364 EAL: Detected lcore 23 as core 23 on socket 0 00:04:07.364 EAL: Detected lcore 24 as core 24 on socket 0 00:04:07.364 EAL: Detected lcore 25 as core 25 on socket 0 00:04:07.364 EAL: Detected lcore 26 as core 26 on socket 0 00:04:07.364 EAL: Detected lcore 27 as core 27 on socket 0 00:04:07.364 EAL: Detected lcore 28 as core 28 on socket 0 00:04:07.364 EAL: Detected lcore 29 as core 29 on socket 0 00:04:07.364 EAL: Detected lcore 30 as core 30 on socket 0 00:04:07.364 EAL: Detected lcore 31 as core 31 on socket 0 00:04:07.364 EAL: Detected lcore 32 as core 32 on socket 0 00:04:07.364 EAL: Detected lcore 33 as core 33 on socket 0 00:04:07.364 EAL: Detected lcore 34 as core 34 on socket 0 00:04:07.364 EAL: Detected lcore 35 as core 35 on socket 0 00:04:07.364 EAL: Detected lcore 36 as core 0 on socket 1 00:04:07.364 EAL: Detected lcore 37 as core 1 on socket 1 00:04:07.364 EAL: Detected lcore 38 as core 2 on socket 1 00:04:07.364 EAL: Detected lcore 39 as core 3 on socket 1 00:04:07.364 EAL: Detected lcore 40 as core 4 on socket 1 00:04:07.364 EAL: Detected lcore 41 as core 5 on socket 1 00:04:07.364 EAL: Detected lcore 42 as core 6 on socket 1 00:04:07.364 EAL: Detected lcore 43 as core 7 on socket 1 00:04:07.364 EAL: Detected lcore 44 as core 8 on socket 1 00:04:07.364 EAL: Detected 
lcore 45 as core 9 on socket 1 00:04:07.364 EAL: Detected lcore 46 as core 10 on socket 1 00:04:07.364 EAL: Detected lcore 47 as core 11 on socket 1 00:04:07.364 EAL: Detected lcore 48 as core 12 on socket 1 00:04:07.364 EAL: Detected lcore 49 as core 13 on socket 1 00:04:07.364 EAL: Detected lcore 50 as core 14 on socket 1 00:04:07.364 EAL: Detected lcore 51 as core 15 on socket 1 00:04:07.364 EAL: Detected lcore 52 as core 16 on socket 1 00:04:07.364 EAL: Detected lcore 53 as core 17 on socket 1 00:04:07.364 EAL: Detected lcore 54 as core 18 on socket 1 00:04:07.364 EAL: Detected lcore 55 as core 19 on socket 1 00:04:07.364 EAL: Detected lcore 56 as core 20 on socket 1 00:04:07.364 EAL: Detected lcore 57 as core 21 on socket 1 00:04:07.364 EAL: Detected lcore 58 as core 22 on socket 1 00:04:07.364 EAL: Detected lcore 59 as core 23 on socket 1 00:04:07.364 EAL: Detected lcore 60 as core 24 on socket 1 00:04:07.364 EAL: Detected lcore 61 as core 25 on socket 1 00:04:07.364 EAL: Detected lcore 62 as core 26 on socket 1 00:04:07.364 EAL: Detected lcore 63 as core 27 on socket 1 00:04:07.364 EAL: Detected lcore 64 as core 28 on socket 1 00:04:07.364 EAL: Detected lcore 65 as core 29 on socket 1 00:04:07.364 EAL: Detected lcore 66 as core 30 on socket 1 00:04:07.364 EAL: Detected lcore 67 as core 31 on socket 1 00:04:07.364 EAL: Detected lcore 68 as core 32 on socket 1 00:04:07.364 EAL: Detected lcore 69 as core 33 on socket 1 00:04:07.364 EAL: Detected lcore 70 as core 34 on socket 1 00:04:07.364 EAL: Detected lcore 71 as core 35 on socket 1 00:04:07.364 EAL: Detected lcore 72 as core 0 on socket 0 00:04:07.364 EAL: Detected lcore 73 as core 1 on socket 0 00:04:07.364 EAL: Detected lcore 74 as core 2 on socket 0 00:04:07.364 EAL: Detected lcore 75 as core 3 on socket 0 00:04:07.364 EAL: Detected lcore 76 as core 4 on socket 0 00:04:07.364 EAL: Detected lcore 77 as core 5 on socket 0 00:04:07.364 EAL: Detected lcore 78 as core 6 on socket 0 00:04:07.364 EAL: Detected lcore 79 as core 7 on socket 0 00:04:07.364 EAL: Detected lcore 80 as core 8 on socket 0 00:04:07.364 EAL: Detected lcore 81 as core 9 on socket 0 00:04:07.364 EAL: Detected lcore 82 as core 10 on socket 0 00:04:07.364 EAL: Detected lcore 83 as core 11 on socket 0 00:04:07.364 EAL: Detected lcore 84 as core 12 on socket 0 00:04:07.364 EAL: Detected lcore 85 as core 13 on socket 0 00:04:07.364 EAL: Detected lcore 86 as core 14 on socket 0 00:04:07.364 EAL: Detected lcore 87 as core 15 on socket 0 00:04:07.364 EAL: Detected lcore 88 as core 16 on socket 0 00:04:07.364 EAL: Detected lcore 89 as core 17 on socket 0 00:04:07.364 EAL: Detected lcore 90 as core 18 on socket 0 00:04:07.364 EAL: Detected lcore 91 as core 19 on socket 0 00:04:07.364 EAL: Detected lcore 92 as core 20 on socket 0 00:04:07.364 EAL: Detected lcore 93 as core 21 on socket 0 00:04:07.364 EAL: Detected lcore 94 as core 22 on socket 0 00:04:07.364 EAL: Detected lcore 95 as core 23 on socket 0 00:04:07.364 EAL: Detected lcore 96 as core 24 on socket 0 00:04:07.364 EAL: Detected lcore 97 as core 25 on socket 0 00:04:07.364 EAL: Detected lcore 98 as core 26 on socket 0 00:04:07.364 EAL: Detected lcore 99 as core 27 on socket 0 00:04:07.364 EAL: Detected lcore 100 as core 28 on socket 0 00:04:07.364 EAL: Detected lcore 101 as core 29 on socket 0 00:04:07.364 EAL: Detected lcore 102 as core 30 on socket 0 00:04:07.364 EAL: Detected lcore 103 as core 31 on socket 0 00:04:07.364 EAL: Detected lcore 104 as core 32 on socket 0 00:04:07.364 EAL: Detected lcore 105 as core 33 
on socket 0 00:04:07.364 EAL: Detected lcore 106 as core 34 on socket 0 00:04:07.364 EAL: Detected lcore 107 as core 35 on socket 0 00:04:07.364 EAL: Detected lcore 108 as core 0 on socket 1 00:04:07.364 EAL: Detected lcore 109 as core 1 on socket 1 00:04:07.364 EAL: Detected lcore 110 as core 2 on socket 1 00:04:07.364 EAL: Detected lcore 111 as core 3 on socket 1 00:04:07.364 EAL: Detected lcore 112 as core 4 on socket 1 00:04:07.364 EAL: Detected lcore 113 as core 5 on socket 1 00:04:07.364 EAL: Detected lcore 114 as core 6 on socket 1 00:04:07.364 EAL: Detected lcore 115 as core 7 on socket 1 00:04:07.364 EAL: Detected lcore 116 as core 8 on socket 1 00:04:07.364 EAL: Detected lcore 117 as core 9 on socket 1 00:04:07.364 EAL: Detected lcore 118 as core 10 on socket 1 00:04:07.364 EAL: Detected lcore 119 as core 11 on socket 1 00:04:07.364 EAL: Detected lcore 120 as core 12 on socket 1 00:04:07.364 EAL: Detected lcore 121 as core 13 on socket 1 00:04:07.364 EAL: Detected lcore 122 as core 14 on socket 1 00:04:07.364 EAL: Detected lcore 123 as core 15 on socket 1 00:04:07.364 EAL: Detected lcore 124 as core 16 on socket 1 00:04:07.364 EAL: Detected lcore 125 as core 17 on socket 1 00:04:07.364 EAL: Detected lcore 126 as core 18 on socket 1 00:04:07.364 EAL: Detected lcore 127 as core 19 on socket 1 00:04:07.364 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:07.364 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:07.364 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:07.364 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:07.364 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:07.364 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:07.364 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:07.364 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:07.364 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:07.364 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:07.364 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:07.364 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:07.364 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:07.364 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:07.364 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:07.364 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:07.364 EAL: Maximum logical cores by configuration: 128 00:04:07.364 EAL: Detected CPU lcores: 128 00:04:07.364 EAL: Detected NUMA nodes: 2 00:04:07.364 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:07.364 EAL: Detected shared linkage of DPDK 00:04:07.364 EAL: No shared files mode enabled, IPC will be disabled 00:04:07.364 EAL: Bus pci wants IOVA as 'DC' 00:04:07.364 EAL: Buses did not request a specific IOVA mode. 00:04:07.364 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:07.364 EAL: Selected IOVA mode 'VA' 00:04:07.364 EAL: Probing VFIO support... 00:04:07.364 EAL: IOMMU type 1 (Type 1) is supported 00:04:07.364 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:07.364 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:07.364 EAL: VFIO support initialized 00:04:07.364 EAL: Ask a virtual area of 0x2e000 bytes 00:04:07.364 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:07.364 EAL: Setting up physically contiguous memory... 
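
The probe sequence above is EAL choosing how DMA addresses will be expressed: IOMMU type 1 is supported and VFIO initializes, so it selects IOVA as VA rather than physical addresses. A rough host-side check that mirrors that decision (a sketch, not EAL's actual logic):

  # If IOMMU groups exist, VFIO can translate and IOVA 'VA' is expected.
  if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
      echo "IOMMU present: expect \"Selected IOVA mode 'VA'\" and VFIO type 1"
  else
      echo "no IOMMU groups: EAL would fall back to IOVA as PA or VFIO no-IOMMU"
  fi
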
00:04:07.364 EAL: Setting maximum number of open files to 524288 00:04:07.364 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:07.364 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:07.364 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:07.364 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.364 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:07.364 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.364 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.364 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:07.364 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:07.364 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.364 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:07.364 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.364 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.364 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:07.364 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:07.364 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.364 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:07.364 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.364 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.364 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:07.364 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:07.364 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.364 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:07.364 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.364 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.364 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:07.364 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:07.364 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:07.364 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.364 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:07.364 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:07.364 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.364 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:07.364 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:07.364 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.364 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:07.364 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:07.364 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.364 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:07.364 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:07.364 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.364 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:07.364 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:07.364 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.364 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:07.364 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:07.364 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.364 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:07.364 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:07.364 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.364 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:07.364 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:07.364 EAL: Hugepages will be freed exactly as allocated. 00:04:07.364 EAL: No shared files mode enabled, IPC is disabled 00:04:07.364 EAL: No shared files mode enabled, IPC is disabled 00:04:07.364 EAL: TSC frequency is ~2400000 KHz 00:04:07.364 EAL: Main lcore 0 is ready (tid=7f78ecee8a00;cpuset=[0]) 00:04:07.364 EAL: Trying to obtain current memory policy. 00:04:07.364 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.364 EAL: Restoring previous memory policy: 0 00:04:07.364 EAL: request: mp_malloc_sync 00:04:07.364 EAL: No shared files mode enabled, IPC is disabled 00:04:07.364 EAL: Heap on socket 0 was expanded by 2MB 00:04:07.364 EAL: No shared files mode enabled, IPC is disabled 00:04:07.364 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:07.364 EAL: Mem event callback 'spdk:(nil)' registered 00:04:07.364 00:04:07.364 00:04:07.364 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.364 http://cunit.sourceforge.net/ 00:04:07.364 00:04:07.364 00:04:07.365 Suite: components_suite 00:04:07.365 Test: vtophys_malloc_test ...passed 00:04:07.365 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:07.365 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.365 EAL: Restoring previous memory policy: 4 00:04:07.365 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.365 EAL: request: mp_malloc_sync 00:04:07.365 EAL: No shared files mode enabled, IPC is disabled 00:04:07.365 EAL: Heap on socket 0 was expanded by 4MB 00:04:07.365 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.365 EAL: request: mp_malloc_sync 00:04:07.365 EAL: No shared files mode enabled, IPC is disabled 00:04:07.365 EAL: Heap on socket 0 was shrunk by 4MB 00:04:07.365 EAL: Trying to obtain current memory policy. 00:04:07.365 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.365 EAL: Restoring previous memory policy: 4 00:04:07.365 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.365 EAL: request: mp_malloc_sync 00:04:07.365 EAL: No shared files mode enabled, IPC is disabled 00:04:07.365 EAL: Heap on socket 0 was expanded by 6MB 00:04:07.365 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.365 EAL: request: mp_malloc_sync 00:04:07.365 EAL: No shared files mode enabled, IPC is disabled 00:04:07.365 EAL: Heap on socket 0 was shrunk by 6MB 00:04:07.365 EAL: Trying to obtain current memory policy. 00:04:07.365 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.365 EAL: Restoring previous memory policy: 4 00:04:07.365 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.365 EAL: request: mp_malloc_sync 00:04:07.365 EAL: No shared files mode enabled, IPC is disabled 00:04:07.365 EAL: Heap on socket 0 was expanded by 10MB 00:04:07.365 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.365 EAL: request: mp_malloc_sync 00:04:07.365 EAL: No shared files mode enabled, IPC is disabled 00:04:07.365 EAL: Heap on socket 0 was shrunk by 10MB 00:04:07.365 EAL: Trying to obtain current memory policy. 
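
The "Ask a virtual area" / "VA reserved for memseg list" pairs above reserve 4 memseg lists per NUMA socket, each a 0x61000-byte metadata area plus a 0x400000000-byte (16 GiB) address window; nothing physical is committed yet, and hugepages are mapped into these windows later, which is why EAL can promise they "will be freed exactly as allocated". The arithmetic behind the reservations:

  # 2 sockets x 4 lists x 16 GiB windows, address space only.
  printf 'reserved VA: %d GiB\n' $(( 2 * 4 * (0x400000000 >> 30) ))   # -> 128 GiB
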
00:04:07.365 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.365 EAL: Restoring previous memory policy: 4 00:04:07.365 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.365 EAL: request: mp_malloc_sync 00:04:07.365 EAL: No shared files mode enabled, IPC is disabled 00:04:07.365 EAL: Heap on socket 0 was expanded by 18MB 00:04:07.365 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.365 EAL: request: mp_malloc_sync 00:04:07.365 EAL: No shared files mode enabled, IPC is disabled 00:04:07.365 EAL: Heap on socket 0 was shrunk by 18MB 00:04:07.365 EAL: Trying to obtain current memory policy. 00:04:07.365 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.365 EAL: Restoring previous memory policy: 4 00:04:07.365 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.365 EAL: request: mp_malloc_sync 00:04:07.365 EAL: No shared files mode enabled, IPC is disabled 00:04:07.365 EAL: Heap on socket 0 was expanded by 34MB 00:04:07.365 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.365 EAL: request: mp_malloc_sync 00:04:07.365 EAL: No shared files mode enabled, IPC is disabled 00:04:07.365 EAL: Heap on socket 0 was shrunk by 34MB 00:04:07.365 EAL: Trying to obtain current memory policy. 00:04:07.365 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.365 EAL: Restoring previous memory policy: 4 00:04:07.365 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.365 EAL: request: mp_malloc_sync 00:04:07.365 EAL: No shared files mode enabled, IPC is disabled 00:04:07.365 EAL: Heap on socket 0 was expanded by 66MB 00:04:07.365 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.365 EAL: request: mp_malloc_sync 00:04:07.365 EAL: No shared files mode enabled, IPC is disabled 00:04:07.365 EAL: Heap on socket 0 was shrunk by 66MB 00:04:07.365 EAL: Trying to obtain current memory policy. 00:04:07.365 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.365 EAL: Restoring previous memory policy: 4 00:04:07.365 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.365 EAL: request: mp_malloc_sync 00:04:07.365 EAL: No shared files mode enabled, IPC is disabled 00:04:07.365 EAL: Heap on socket 0 was expanded by 130MB 00:04:07.365 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.365 EAL: request: mp_malloc_sync 00:04:07.365 EAL: No shared files mode enabled, IPC is disabled 00:04:07.365 EAL: Heap on socket 0 was shrunk by 130MB 00:04:07.365 EAL: Trying to obtain current memory policy. 00:04:07.365 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.365 EAL: Restoring previous memory policy: 4 00:04:07.365 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.365 EAL: request: mp_malloc_sync 00:04:07.365 EAL: No shared files mode enabled, IPC is disabled 00:04:07.365 EAL: Heap on socket 0 was expanded by 258MB 00:04:07.365 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.625 EAL: request: mp_malloc_sync 00:04:07.625 EAL: No shared files mode enabled, IPC is disabled 00:04:07.625 EAL: Heap on socket 0 was shrunk by 258MB 00:04:07.625 EAL: Trying to obtain current memory policy. 
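
The expand/shrink pairs in vtophys_spdk_malloc_test, here and in the rounds that follow, grow the heap by amounts that track (2^n + 2) MB per round, so each allocation and free drives the registered 'spdk:(nil)' mem event callback in both directions. The ladder, reproduced as a sketch:

  # Heap growth per round, matching the MB amounts in the log.
  for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
  # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB
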
00:04:07.625 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.625 EAL: Restoring previous memory policy: 4 00:04:07.625 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.625 EAL: request: mp_malloc_sync 00:04:07.625 EAL: No shared files mode enabled, IPC is disabled 00:04:07.625 EAL: Heap on socket 0 was expanded by 514MB 00:04:07.625 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.625 EAL: request: mp_malloc_sync 00:04:07.625 EAL: No shared files mode enabled, IPC is disabled 00:04:07.625 EAL: Heap on socket 0 was shrunk by 514MB 00:04:07.625 EAL: Trying to obtain current memory policy. 00:04:07.625 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.882 EAL: Restoring previous memory policy: 4 00:04:07.882 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.882 EAL: request: mp_malloc_sync 00:04:07.882 EAL: No shared files mode enabled, IPC is disabled 00:04:07.882 EAL: Heap on socket 0 was expanded by 1026MB 00:04:07.882 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.141 EAL: request: mp_malloc_sync 00:04:08.141 EAL: No shared files mode enabled, IPC is disabled 00:04:08.141 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:08.141 passed 00:04:08.141 00:04:08.141 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.141 suites 1 1 n/a 0 0 00:04:08.141 tests 2 2 2 0 0 00:04:08.141 asserts 497 497 497 0 n/a 00:04:08.141 00:04:08.141 Elapsed time = 0.688 seconds 00:04:08.141 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.141 EAL: request: mp_malloc_sync 00:04:08.141 EAL: No shared files mode enabled, IPC is disabled 00:04:08.141 EAL: Heap on socket 0 was shrunk by 2MB 00:04:08.141 EAL: No shared files mode enabled, IPC is disabled 00:04:08.141 EAL: No shared files mode enabled, IPC is disabled 00:04:08.141 EAL: No shared files mode enabled, IPC is disabled 00:04:08.141 00:04:08.141 real 0m0.821s 00:04:08.141 user 0m0.430s 00:04:08.141 sys 0m0.358s 00:04:08.141 17:40:55 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.141 17:40:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:08.141 ************************************ 00:04:08.141 END TEST env_vtophys 00:04:08.141 ************************************ 00:04:08.141 17:40:55 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:08.141 17:40:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.141 17:40:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.141 17:40:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.141 ************************************ 00:04:08.141 START TEST env_pci 00:04:08.141 ************************************ 00:04:08.141 17:40:55 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:08.142 00:04:08.142 00:04:08.142 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.142 http://cunit.sourceforge.net/ 00:04:08.142 00:04:08.142 00:04:08.142 Suite: pci 00:04:08.142 Test: pci_hook ...[2024-12-06 17:40:55.803099] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2766660 has claimed it 00:04:08.142 EAL: Cannot find device (10000:00:01.0) 00:04:08.142 EAL: Failed to attach device on primary process 00:04:08.142 passed 00:04:08.142 00:04:08.142 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:08.142 suites 1 1 n/a 0 0 00:04:08.142 tests 1 1 1 0 0 00:04:08.142 asserts 25 25 25 0 n/a 00:04:08.142 00:04:08.142 Elapsed time = 0.024 seconds 00:04:08.142 00:04:08.142 real 0m0.035s 00:04:08.142 user 0m0.011s 00:04:08.142 sys 0m0.023s 00:04:08.142 17:40:55 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.142 17:40:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:08.142 ************************************ 00:04:08.142 END TEST env_pci 00:04:08.142 ************************************ 00:04:08.142 17:40:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:08.142 17:40:55 env -- env/env.sh@15 -- # uname 00:04:08.142 17:40:55 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:08.142 17:40:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:08.142 17:40:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:08.142 17:40:55 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:08.142 17:40:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.142 17:40:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.142 ************************************ 00:04:08.142 START TEST env_dpdk_post_init 00:04:08.142 ************************************ 00:04:08.142 17:40:55 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:08.142 EAL: Detected CPU lcores: 128 00:04:08.142 EAL: Detected NUMA nodes: 2 00:04:08.142 EAL: Detected shared linkage of DPDK 00:04:08.142 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:08.142 EAL: Selected IOVA mode 'VA' 00:04:08.142 EAL: VFIO support initialized 00:04:08.142 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:08.401 EAL: Using IOMMU type 1 (Type 1) 00:04:08.401 EAL: Ignore mapping IO port bar(1) 00:04:08.659 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:08.659 EAL: Ignore mapping IO port bar(1) 00:04:08.659 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:08.918 EAL: Ignore mapping IO port bar(1) 00:04:08.918 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:09.177 EAL: Ignore mapping IO port bar(1) 00:04:09.177 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:09.434 EAL: Ignore mapping IO port bar(1) 00:04:09.434 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:09.434 EAL: Ignore mapping IO port bar(1) 00:04:09.693 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:09.693 EAL: Ignore mapping IO port bar(1) 00:04:09.952 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:09.952 EAL: Ignore mapping IO port bar(1) 00:04:10.211 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:10.211 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:10.470 EAL: Ignore mapping IO port bar(1) 00:04:10.470 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:10.730 EAL: Ignore mapping IO port bar(1) 00:04:10.730 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:10.988 EAL: Ignore mapping IO port bar(1) 00:04:10.988 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:11.247 EAL: Ignore mapping IO port bar(1) 00:04:11.247 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:11.247 EAL: Ignore mapping IO port bar(1) 00:04:11.505 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:11.505 EAL: Ignore mapping IO port bar(1) 00:04:11.763 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:11.763 EAL: Ignore mapping IO port bar(1) 00:04:12.022 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:12.022 EAL: Ignore mapping IO port bar(1) 00:04:12.022 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:12.022 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:12.022 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:12.280 Starting DPDK initialization... 00:04:12.280 Starting SPDK post initialization... 00:04:12.280 SPDK NVMe probe 00:04:12.280 Attaching to 0000:65:00.0 00:04:12.280 Attached to 0000:65:00.0 00:04:12.280 Cleaning up... 00:04:14.185 00:04:14.185 real 0m5.731s 00:04:14.185 user 0m0.104s 00:04:14.185 sys 0m0.181s 00:04:14.185 17:41:01 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.185 17:41:01 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.185 ************************************ 00:04:14.185 END TEST env_dpdk_post_init 00:04:14.185 ************************************ 00:04:14.185 17:41:01 env -- env/env.sh@26 -- # uname 00:04:14.185 17:41:01 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:14.185 17:41:01 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.185 17:41:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.185 17:41:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.185 17:41:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.185 ************************************ 00:04:14.185 START TEST env_mem_callbacks 00:04:14.185 ************************************ 00:04:14.185 17:41:01 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.185 EAL: Detected CPU lcores: 128 00:04:14.185 EAL: Detected NUMA nodes: 2 00:04:14.185 EAL: Detected shared linkage of DPDK 00:04:14.185 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:14.185 EAL: Selected IOVA mode 'VA' 00:04:14.185 EAL: VFIO support initialized 00:04:14.185 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:14.185 00:04:14.185 00:04:14.185 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.185 http://cunit.sourceforge.net/ 00:04:14.185 00:04:14.185 00:04:14.185 Suite: memory 00:04:14.185 Test: test ... 
00:04:14.185 register 0x200000200000 2097152
00:04:14.185 malloc 3145728
00:04:14.185 register 0x200000400000 4194304
00:04:14.185 buf 0x200000500000 len 3145728 PASSED
00:04:14.185 malloc 64
00:04:14.185 buf 0x2000004fff40 len 64 PASSED
00:04:14.185 malloc 4194304
00:04:14.185 register 0x200000800000 6291456
00:04:14.185 buf 0x200000a00000 len 4194304 PASSED
00:04:14.185 free 0x200000500000 3145728
00:04:14.185 free 0x2000004fff40 64
00:04:14.185 unregister 0x200000400000 4194304 PASSED
00:04:14.185 free 0x200000a00000 4194304
00:04:14.185 unregister 0x200000800000 6291456 PASSED
00:04:14.185 malloc 8388608
00:04:14.185 register 0x200000400000 10485760
00:04:14.185 buf 0x200000600000 len 8388608 PASSED
00:04:14.185 free 0x200000600000 8388608
00:04:14.185 unregister 0x200000400000 10485760 PASSED
00:04:14.185 passed
00:04:14.185
00:04:14.185 Run Summary: Type Total Ran Passed Failed Inactive
00:04:14.185 suites 1 1 n/a 0 0
00:04:14.185 tests 1 1 1 0 0
00:04:14.185 asserts 15 15 15 0 n/a
00:04:14.185
00:04:14.185 Elapsed time = 0.008 seconds
00:04:14.185
00:04:14.185 real 0m0.054s
00:04:14.185 user 0m0.013s
00:04:14.185 sys 0m0.040s
00:04:14.185 17:41:01 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:14.185 17:41:01 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:14.185 ************************************
00:04:14.185 END TEST env_mem_callbacks
00:04:14.185 ************************************
00:04:14.185
00:04:14.185 real 0m7.225s
00:04:14.185 user 0m0.894s
00:04:14.185 sys 0m0.874s
00:04:14.185 17:41:01 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:14.185 17:41:01 env -- common/autotest_common.sh@10 -- # set +x
00:04:14.186 ************************************
00:04:14.186 END TEST env
00:04:14.186 ************************************
00:04:14.186 17:41:01 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:14.186 17:41:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:14.186 17:41:01 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:14.186 17:41:01 -- common/autotest_common.sh@10 -- # set +x
00:04:14.186 ************************************
00:04:14.186 START TEST rpc
00:04:14.186 ************************************
00:04:14.186 17:41:01 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:14.186 * Looking for test storage...
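The register/unregister lines in the env_mem_callbacks trace above show memory registration events being delivered as the test mallocs and frees DPDK memory; SPDK exposes those events to consumers through an spdk_mem_map with a notify callback. A rough sketch against the public env API, with an identity translation that is illustrative only:

    #include <errno.h>
    #include <stdint.h>
    #include "spdk/env.h"

    /* Invoked once per registered/unregistered region -- one call per
     * "register 0x... / unregister 0x..." line in the trace above. */
    static int
    notify_cb(void *cb_ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
            (void)cb_ctx;
            switch (action) {
            case SPDK_MEM_MAP_NOTIFY_REGISTER:
                    /* Illustrative: record an identity translation. */
                    return spdk_mem_map_set_translation(map,
                                    (uint64_t)(uintptr_t)vaddr, size,
                                    (uint64_t)(uintptr_t)vaddr);
            case SPDK_MEM_MAP_NOTIFY_UNREGISTER:
                    return spdk_mem_map_clear_translation(map,
                                    (uint64_t)(uintptr_t)vaddr, size);
            default:
                    return -EINVAL;
            }
    }

    static const struct spdk_mem_map_ops mem_map_ops = {
            .notify_cb = notify_cb,
            .are_contiguous = NULL,
    };

    /* After this, every spdk_mem_register()/spdk_mem_unregister() call
     * is reflected into the map via notify_cb. */
    static struct spdk_mem_map *
    track_registrations(void)
    {
            return spdk_mem_map_alloc(0, &mem_map_ops, NULL);
    }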
00:04:14.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:14.186 17:41:01 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:14.186 17:41:01 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:14.186 17:41:01 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:14.186 17:41:01 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:14.186 17:41:01 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.186 17:41:01 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.186 17:41:01 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.186 17:41:01 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.186 17:41:01 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.186 17:41:01 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.186 17:41:01 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.186 17:41:01 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.186 17:41:01 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.186 17:41:01 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.186 17:41:01 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.186 17:41:01 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:14.186 17:41:01 rpc -- scripts/common.sh@345 -- # : 1 00:04:14.186 17:41:01 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.186 17:41:01 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:14.186 17:41:01 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:14.186 17:41:01 rpc -- scripts/common.sh@353 -- # local d=1 00:04:14.186 17:41:01 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.186 17:41:01 rpc -- scripts/common.sh@355 -- # echo 1 00:04:14.186 17:41:01 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.186 17:41:01 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:14.186 17:41:01 rpc -- scripts/common.sh@353 -- # local d=2 00:04:14.186 17:41:01 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.186 17:41:01 rpc -- scripts/common.sh@355 -- # echo 2 00:04:14.186 17:41:01 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.186 17:41:01 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.186 17:41:01 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.186 17:41:01 rpc -- scripts/common.sh@368 -- # return 0 00:04:14.186 17:41:01 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.186 17:41:01 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:14.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.186 --rc genhtml_branch_coverage=1 00:04:14.186 --rc genhtml_function_coverage=1 00:04:14.186 --rc genhtml_legend=1 00:04:14.186 --rc geninfo_all_blocks=1 00:04:14.186 --rc geninfo_unexecuted_blocks=1 00:04:14.186 00:04:14.186 ' 00:04:14.186 17:41:01 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:14.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.186 --rc genhtml_branch_coverage=1 00:04:14.186 --rc genhtml_function_coverage=1 00:04:14.186 --rc genhtml_legend=1 00:04:14.186 --rc geninfo_all_blocks=1 00:04:14.186 --rc geninfo_unexecuted_blocks=1 00:04:14.186 00:04:14.186 ' 00:04:14.186 17:41:01 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:14.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.186 --rc genhtml_branch_coverage=1 00:04:14.186 --rc genhtml_function_coverage=1 
00:04:14.186 --rc genhtml_legend=1 00:04:14.186 --rc geninfo_all_blocks=1 00:04:14.186 --rc geninfo_unexecuted_blocks=1 00:04:14.186 00:04:14.186 ' 00:04:14.186 17:41:01 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:14.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.186 --rc genhtml_branch_coverage=1 00:04:14.186 --rc genhtml_function_coverage=1 00:04:14.186 --rc genhtml_legend=1 00:04:14.186 --rc geninfo_all_blocks=1 00:04:14.186 --rc geninfo_unexecuted_blocks=1 00:04:14.186 00:04:14.186 ' 00:04:14.186 17:41:01 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2768119 00:04:14.186 17:41:01 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.186 17:41:01 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2768119 00:04:14.186 17:41:01 rpc -- common/autotest_common.sh@835 -- # '[' -z 2768119 ']' 00:04:14.186 17:41:01 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.186 17:41:01 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.186 17:41:01 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.186 17:41:01 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.186 17:41:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.186 17:41:01 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:14.186 [2024-12-06 17:41:01.984358] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:04:14.186 [2024-12-06 17:41:01.984431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2768119 ] 00:04:14.446 [2024-12-06 17:41:02.070845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.446 [2024-12-06 17:41:02.123342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:14.446 [2024-12-06 17:41:02.123397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2768119' to capture a snapshot of events at runtime. 00:04:14.446 [2024-12-06 17:41:02.123406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:14.446 [2024-12-06 17:41:02.123413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:14.446 [2024-12-06 17:41:02.123420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2768119 for offline analysis/debug. 
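The NOTICE lines around this point -- total cores available, the bdev tracepoint group mask, the /dev/shm trace-file hint, and the "Reactor started on core 0" line just below -- are all emitted while spdk_app_start() brings the spdk_tgt event framework up. A bare-bones sketch of that startup path (the app name here is hypothetical; spdk_tgt sets its own):

    #include "spdk/event.h"

    /* Runs on the main reactor once the framework is live, i.e. right
     * after the "Reactor started on core 0" notice below. */
    static void
    app_started(void *ctx)
    {
            (void)ctx;
            spdk_app_stop(0); /* a real target keeps running and serves RPCs */
    }

    int
    main(int argc, char **argv)
    {
            struct spdk_app_opts opts = {};
            int rc;

            (void)argc;
            (void)argv;
            spdk_app_opts_init(&opts, sizeof(opts));
            opts.name = "demo_tgt"; /* hypothetical */

            rc = spdk_app_start(&opts, app_started, NULL);
            spdk_app_fini();
            return rc;
    }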
00:04:14.446 [2024-12-06 17:41:02.124213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.014 17:41:02 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.014 17:41:02 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:15.014 17:41:02 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:15.014 17:41:02 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:15.014 17:41:02 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:15.014 17:41:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:15.014 17:41:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.014 17:41:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.014 17:41:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.014 ************************************ 00:04:15.014 START TEST rpc_integrity 00:04:15.014 ************************************ 00:04:15.014 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:15.014 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:15.014 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.014 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.014 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.014 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:15.014 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:15.273 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:15.273 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:15.273 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.273 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.273 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.273 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:15.273 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:15.273 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.273 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.273 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.273 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:15.273 { 00:04:15.273 "name": "Malloc0", 00:04:15.273 "aliases": [ 00:04:15.273 "7eb07065-275c-4afc-b43e-32e793b1bf4f" 00:04:15.273 ], 00:04:15.273 "product_name": "Malloc disk", 00:04:15.273 "block_size": 512, 00:04:15.273 "num_blocks": 16384, 00:04:15.273 "uuid": "7eb07065-275c-4afc-b43e-32e793b1bf4f", 00:04:15.273 "assigned_rate_limits": { 00:04:15.273 "rw_ios_per_sec": 0, 00:04:15.273 "rw_mbytes_per_sec": 0, 00:04:15.273 "r_mbytes_per_sec": 0, 00:04:15.273 "w_mbytes_per_sec": 0 00:04:15.273 }, 
00:04:15.273 "claimed": false, 00:04:15.273 "zoned": false, 00:04:15.273 "supported_io_types": { 00:04:15.273 "read": true, 00:04:15.273 "write": true, 00:04:15.273 "unmap": true, 00:04:15.273 "flush": true, 00:04:15.273 "reset": true, 00:04:15.273 "nvme_admin": false, 00:04:15.273 "nvme_io": false, 00:04:15.273 "nvme_io_md": false, 00:04:15.273 "write_zeroes": true, 00:04:15.273 "zcopy": true, 00:04:15.273 "get_zone_info": false, 00:04:15.273 "zone_management": false, 00:04:15.273 "zone_append": false, 00:04:15.273 "compare": false, 00:04:15.273 "compare_and_write": false, 00:04:15.273 "abort": true, 00:04:15.273 "seek_hole": false, 00:04:15.273 "seek_data": false, 00:04:15.273 "copy": true, 00:04:15.273 "nvme_iov_md": false 00:04:15.273 }, 00:04:15.273 "memory_domains": [ 00:04:15.273 { 00:04:15.273 "dma_device_id": "system", 00:04:15.273 "dma_device_type": 1 00:04:15.273 }, 00:04:15.273 { 00:04:15.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.273 "dma_device_type": 2 00:04:15.273 } 00:04:15.273 ], 00:04:15.273 "driver_specific": {} 00:04:15.273 } 00:04:15.273 ]' 00:04:15.273 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:15.273 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:15.273 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:15.273 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.273 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.273 [2024-12-06 17:41:02.912436] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:15.273 [2024-12-06 17:41:02.912481] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:15.273 [2024-12-06 17:41:02.912498] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ff6840 00:04:15.273 [2024-12-06 17:41:02.912506] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.273 [2024-12-06 17:41:02.914135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.273 [2024-12-06 17:41:02.914171] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:15.273 Passthru0 00:04:15.273 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.273 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:15.273 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.273 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.273 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.273 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:15.273 { 00:04:15.273 "name": "Malloc0", 00:04:15.273 "aliases": [ 00:04:15.273 "7eb07065-275c-4afc-b43e-32e793b1bf4f" 00:04:15.273 ], 00:04:15.273 "product_name": "Malloc disk", 00:04:15.273 "block_size": 512, 00:04:15.273 "num_blocks": 16384, 00:04:15.273 "uuid": "7eb07065-275c-4afc-b43e-32e793b1bf4f", 00:04:15.273 "assigned_rate_limits": { 00:04:15.274 "rw_ios_per_sec": 0, 00:04:15.274 "rw_mbytes_per_sec": 0, 00:04:15.274 "r_mbytes_per_sec": 0, 00:04:15.274 "w_mbytes_per_sec": 0 00:04:15.274 }, 00:04:15.274 "claimed": true, 00:04:15.274 "claim_type": "exclusive_write", 00:04:15.274 "zoned": false, 00:04:15.274 "supported_io_types": { 00:04:15.274 "read": true, 00:04:15.274 "write": true, 00:04:15.274 "unmap": true, 00:04:15.274 "flush": 
true, 00:04:15.274 "reset": true, 00:04:15.274 "nvme_admin": false, 00:04:15.274 "nvme_io": false, 00:04:15.274 "nvme_io_md": false, 00:04:15.274 "write_zeroes": true, 00:04:15.274 "zcopy": true, 00:04:15.274 "get_zone_info": false, 00:04:15.274 "zone_management": false, 00:04:15.274 "zone_append": false, 00:04:15.274 "compare": false, 00:04:15.274 "compare_and_write": false, 00:04:15.274 "abort": true, 00:04:15.274 "seek_hole": false, 00:04:15.274 "seek_data": false, 00:04:15.274 "copy": true, 00:04:15.274 "nvme_iov_md": false 00:04:15.274 }, 00:04:15.274 "memory_domains": [ 00:04:15.274 { 00:04:15.274 "dma_device_id": "system", 00:04:15.274 "dma_device_type": 1 00:04:15.274 }, 00:04:15.274 { 00:04:15.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.274 "dma_device_type": 2 00:04:15.274 } 00:04:15.274 ], 00:04:15.274 "driver_specific": {} 00:04:15.274 }, 00:04:15.274 { 00:04:15.274 "name": "Passthru0", 00:04:15.274 "aliases": [ 00:04:15.274 "ca1e4fb0-383c-564d-828b-e254861df1a9" 00:04:15.274 ], 00:04:15.274 "product_name": "passthru", 00:04:15.274 "block_size": 512, 00:04:15.274 "num_blocks": 16384, 00:04:15.274 "uuid": "ca1e4fb0-383c-564d-828b-e254861df1a9", 00:04:15.274 "assigned_rate_limits": { 00:04:15.274 "rw_ios_per_sec": 0, 00:04:15.274 "rw_mbytes_per_sec": 0, 00:04:15.274 "r_mbytes_per_sec": 0, 00:04:15.274 "w_mbytes_per_sec": 0 00:04:15.274 }, 00:04:15.274 "claimed": false, 00:04:15.274 "zoned": false, 00:04:15.274 "supported_io_types": { 00:04:15.274 "read": true, 00:04:15.274 "write": true, 00:04:15.274 "unmap": true, 00:04:15.274 "flush": true, 00:04:15.274 "reset": true, 00:04:15.274 "nvme_admin": false, 00:04:15.274 "nvme_io": false, 00:04:15.274 "nvme_io_md": false, 00:04:15.274 "write_zeroes": true, 00:04:15.274 "zcopy": true, 00:04:15.274 "get_zone_info": false, 00:04:15.274 "zone_management": false, 00:04:15.274 "zone_append": false, 00:04:15.274 "compare": false, 00:04:15.274 "compare_and_write": false, 00:04:15.274 "abort": true, 00:04:15.274 "seek_hole": false, 00:04:15.274 "seek_data": false, 00:04:15.274 "copy": true, 00:04:15.274 "nvme_iov_md": false 00:04:15.274 }, 00:04:15.274 "memory_domains": [ 00:04:15.274 { 00:04:15.274 "dma_device_id": "system", 00:04:15.274 "dma_device_type": 1 00:04:15.274 }, 00:04:15.274 { 00:04:15.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.274 "dma_device_type": 2 00:04:15.274 } 00:04:15.274 ], 00:04:15.274 "driver_specific": { 00:04:15.274 "passthru": { 00:04:15.274 "name": "Passthru0", 00:04:15.274 "base_bdev_name": "Malloc0" 00:04:15.274 } 00:04:15.274 } 00:04:15.274 } 00:04:15.274 ]' 00:04:15.274 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:15.274 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:15.274 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:15.274 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.274 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.274 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.274 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:15.274 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.274 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.274 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.274 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:15.274 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.274 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.274 17:41:02 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.274 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:15.274 17:41:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:15.274 17:41:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:15.274 00:04:15.274 real 0m0.200s 00:04:15.274 user 0m0.111s 00:04:15.274 sys 0m0.035s 00:04:15.274 17:41:03 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.274 17:41:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.274 ************************************ 00:04:15.274 END TEST rpc_integrity 00:04:15.274 ************************************ 00:04:15.274 17:41:03 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:15.274 17:41:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.274 17:41:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.274 17:41:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.274 ************************************ 00:04:15.274 START TEST rpc_plugins 00:04:15.274 ************************************ 00:04:15.274 17:41:03 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:15.274 17:41:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:15.274 17:41:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.274 17:41:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.274 17:41:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.274 17:41:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:15.274 17:41:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:15.274 17:41:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.274 17:41:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.274 17:41:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.274 17:41:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:15.274 { 00:04:15.274 "name": "Malloc1", 00:04:15.274 "aliases": [ 00:04:15.274 "3206399b-5393-4a01-81bc-ffa904ea646c" 00:04:15.274 ], 00:04:15.274 "product_name": "Malloc disk", 00:04:15.274 "block_size": 4096, 00:04:15.274 "num_blocks": 256, 00:04:15.274 "uuid": "3206399b-5393-4a01-81bc-ffa904ea646c", 00:04:15.274 "assigned_rate_limits": { 00:04:15.274 "rw_ios_per_sec": 0, 00:04:15.274 "rw_mbytes_per_sec": 0, 00:04:15.274 "r_mbytes_per_sec": 0, 00:04:15.274 "w_mbytes_per_sec": 0 00:04:15.274 }, 00:04:15.274 "claimed": false, 00:04:15.274 "zoned": false, 00:04:15.274 "supported_io_types": { 00:04:15.274 "read": true, 00:04:15.274 "write": true, 00:04:15.274 "unmap": true, 00:04:15.274 "flush": true, 00:04:15.274 "reset": true, 00:04:15.274 "nvme_admin": false, 00:04:15.274 "nvme_io": false, 00:04:15.274 "nvme_io_md": false, 00:04:15.274 "write_zeroes": true, 00:04:15.274 "zcopy": true, 00:04:15.274 "get_zone_info": false, 00:04:15.274 "zone_management": false, 00:04:15.274 "zone_append": false, 00:04:15.274 "compare": false, 00:04:15.274 "compare_and_write": false, 00:04:15.274 "abort": true, 00:04:15.274 "seek_hole": false, 00:04:15.274 "seek_data": false, 00:04:15.274 "copy": true, 00:04:15.274 "nvme_iov_md": false 
00:04:15.274 }, 00:04:15.274 "memory_domains": [ 00:04:15.274 { 00:04:15.274 "dma_device_id": "system", 00:04:15.274 "dma_device_type": 1 00:04:15.274 }, 00:04:15.274 { 00:04:15.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.274 "dma_device_type": 2 00:04:15.274 } 00:04:15.274 ], 00:04:15.274 "driver_specific": {} 00:04:15.274 } 00:04:15.274 ]' 00:04:15.274 17:41:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:15.534 17:41:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:15.534 17:41:03 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:15.534 17:41:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.534 17:41:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.534 17:41:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.534 17:41:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:15.534 17:41:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.534 17:41:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.534 17:41:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.534 17:41:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:15.534 17:41:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:15.534 17:41:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:15.534 00:04:15.534 real 0m0.105s 00:04:15.534 user 0m0.053s 00:04:15.534 sys 0m0.019s 00:04:15.534 17:41:03 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.534 17:41:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.534 ************************************ 00:04:15.534 END TEST rpc_plugins 00:04:15.534 ************************************ 00:04:15.534 17:41:03 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:15.534 17:41:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.534 17:41:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.534 17:41:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.534 ************************************ 00:04:15.534 START TEST rpc_trace_cmd_test 00:04:15.534 ************************************ 00:04:15.535 17:41:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:15.535 17:41:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:15.535 17:41:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:15.535 17:41:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.535 17:41:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:15.535 17:41:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.535 17:41:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:15.535 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2768119", 00:04:15.535 "tpoint_group_mask": "0x8", 00:04:15.535 "iscsi_conn": { 00:04:15.535 "mask": "0x2", 00:04:15.535 "tpoint_mask": "0x0" 00:04:15.535 }, 00:04:15.535 "scsi": { 00:04:15.535 "mask": "0x4", 00:04:15.535 "tpoint_mask": "0x0" 00:04:15.535 }, 00:04:15.535 "bdev": { 00:04:15.535 "mask": "0x8", 00:04:15.535 "tpoint_mask": "0xffffffffffffffff" 00:04:15.535 }, 00:04:15.535 "nvmf_rdma": { 00:04:15.535 "mask": "0x10", 00:04:15.535 "tpoint_mask": "0x0" 00:04:15.535 }, 00:04:15.535 "nvmf_tcp": { 00:04:15.535 "mask": "0x20", 00:04:15.535 
"tpoint_mask": "0x0" 00:04:15.535 }, 00:04:15.535 "ftl": { 00:04:15.535 "mask": "0x40", 00:04:15.535 "tpoint_mask": "0x0" 00:04:15.535 }, 00:04:15.535 "blobfs": { 00:04:15.535 "mask": "0x80", 00:04:15.535 "tpoint_mask": "0x0" 00:04:15.535 }, 00:04:15.535 "dsa": { 00:04:15.535 "mask": "0x200", 00:04:15.535 "tpoint_mask": "0x0" 00:04:15.535 }, 00:04:15.535 "thread": { 00:04:15.535 "mask": "0x400", 00:04:15.535 "tpoint_mask": "0x0" 00:04:15.535 }, 00:04:15.535 "nvme_pcie": { 00:04:15.535 "mask": "0x800", 00:04:15.535 "tpoint_mask": "0x0" 00:04:15.535 }, 00:04:15.535 "iaa": { 00:04:15.535 "mask": "0x1000", 00:04:15.535 "tpoint_mask": "0x0" 00:04:15.535 }, 00:04:15.535 "nvme_tcp": { 00:04:15.535 "mask": "0x2000", 00:04:15.535 "tpoint_mask": "0x0" 00:04:15.535 }, 00:04:15.535 "bdev_nvme": { 00:04:15.535 "mask": "0x4000", 00:04:15.535 "tpoint_mask": "0x0" 00:04:15.535 }, 00:04:15.535 "sock": { 00:04:15.535 "mask": "0x8000", 00:04:15.535 "tpoint_mask": "0x0" 00:04:15.535 }, 00:04:15.535 "blob": { 00:04:15.535 "mask": "0x10000", 00:04:15.535 "tpoint_mask": "0x0" 00:04:15.535 }, 00:04:15.535 "bdev_raid": { 00:04:15.535 "mask": "0x20000", 00:04:15.535 "tpoint_mask": "0x0" 00:04:15.535 }, 00:04:15.535 "scheduler": { 00:04:15.535 "mask": "0x40000", 00:04:15.535 "tpoint_mask": "0x0" 00:04:15.535 } 00:04:15.535 }' 00:04:15.535 17:41:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:15.535 17:41:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:15.535 17:41:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:15.535 17:41:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:15.535 17:41:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:15.535 17:41:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:15.535 17:41:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:15.535 17:41:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:15.535 17:41:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:15.795 17:41:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:15.795 00:04:15.795 real 0m0.158s 00:04:15.795 user 0m0.122s 00:04:15.795 sys 0m0.026s 00:04:15.795 17:41:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.795 17:41:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:15.795 ************************************ 00:04:15.795 END TEST rpc_trace_cmd_test 00:04:15.795 ************************************ 00:04:15.795 17:41:03 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:15.795 17:41:03 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:15.795 17:41:03 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:15.795 17:41:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.795 17:41:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.795 17:41:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.795 ************************************ 00:04:15.795 START TEST rpc_daemon_integrity 00:04:15.795 ************************************ 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.795 17:41:03 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:15.795 { 00:04:15.795 "name": "Malloc2", 00:04:15.795 "aliases": [ 00:04:15.795 "8a38f52b-6d39-4c9a-8f9b-5ac11b0baed1" 00:04:15.795 ], 00:04:15.795 "product_name": "Malloc disk", 00:04:15.795 "block_size": 512, 00:04:15.795 "num_blocks": 16384, 00:04:15.795 "uuid": "8a38f52b-6d39-4c9a-8f9b-5ac11b0baed1", 00:04:15.795 "assigned_rate_limits": { 00:04:15.795 "rw_ios_per_sec": 0, 00:04:15.795 "rw_mbytes_per_sec": 0, 00:04:15.795 "r_mbytes_per_sec": 0, 00:04:15.795 "w_mbytes_per_sec": 0 00:04:15.795 }, 00:04:15.795 "claimed": false, 00:04:15.795 "zoned": false, 00:04:15.795 "supported_io_types": { 00:04:15.795 "read": true, 00:04:15.795 "write": true, 00:04:15.795 "unmap": true, 00:04:15.795 "flush": true, 00:04:15.795 "reset": true, 00:04:15.795 "nvme_admin": false, 00:04:15.795 "nvme_io": false, 00:04:15.795 "nvme_io_md": false, 00:04:15.795 "write_zeroes": true, 00:04:15.795 "zcopy": true, 00:04:15.795 "get_zone_info": false, 00:04:15.795 "zone_management": false, 00:04:15.795 "zone_append": false, 00:04:15.795 "compare": false, 00:04:15.795 "compare_and_write": false, 00:04:15.795 "abort": true, 00:04:15.795 "seek_hole": false, 00:04:15.795 "seek_data": false, 00:04:15.795 "copy": true, 00:04:15.795 "nvme_iov_md": false 00:04:15.795 }, 00:04:15.795 "memory_domains": [ 00:04:15.795 { 00:04:15.795 "dma_device_id": "system", 00:04:15.795 "dma_device_type": 1 00:04:15.795 }, 00:04:15.795 { 00:04:15.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.795 "dma_device_type": 2 00:04:15.795 } 00:04:15.795 ], 00:04:15.795 "driver_specific": {} 00:04:15.795 } 00:04:15.795 ]' 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.795 [2024-12-06 17:41:03.526110] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:15.795 
[2024-12-06 17:41:03.526154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:15.795 [2024-12-06 17:41:03.526170] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f45ae0 00:04:15.795 [2024-12-06 17:41:03.526177] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.795 [2024-12-06 17:41:03.527701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.795 [2024-12-06 17:41:03.527739] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:15.795 Passthru0 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.795 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:15.795 { 00:04:15.795 "name": "Malloc2", 00:04:15.795 "aliases": [ 00:04:15.795 "8a38f52b-6d39-4c9a-8f9b-5ac11b0baed1" 00:04:15.795 ], 00:04:15.795 "product_name": "Malloc disk", 00:04:15.795 "block_size": 512, 00:04:15.795 "num_blocks": 16384, 00:04:15.795 "uuid": "8a38f52b-6d39-4c9a-8f9b-5ac11b0baed1", 00:04:15.795 "assigned_rate_limits": { 00:04:15.795 "rw_ios_per_sec": 0, 00:04:15.795 "rw_mbytes_per_sec": 0, 00:04:15.795 "r_mbytes_per_sec": 0, 00:04:15.795 "w_mbytes_per_sec": 0 00:04:15.795 }, 00:04:15.795 "claimed": true, 00:04:15.795 "claim_type": "exclusive_write", 00:04:15.795 "zoned": false, 00:04:15.795 "supported_io_types": { 00:04:15.795 "read": true, 00:04:15.795 "write": true, 00:04:15.795 "unmap": true, 00:04:15.795 "flush": true, 00:04:15.795 "reset": true, 00:04:15.795 "nvme_admin": false, 00:04:15.795 "nvme_io": false, 00:04:15.795 "nvme_io_md": false, 00:04:15.795 "write_zeroes": true, 00:04:15.795 "zcopy": true, 00:04:15.795 "get_zone_info": false, 00:04:15.795 "zone_management": false, 00:04:15.795 "zone_append": false, 00:04:15.796 "compare": false, 00:04:15.796 "compare_and_write": false, 00:04:15.796 "abort": true, 00:04:15.796 "seek_hole": false, 00:04:15.796 "seek_data": false, 00:04:15.796 "copy": true, 00:04:15.796 "nvme_iov_md": false 00:04:15.796 }, 00:04:15.796 "memory_domains": [ 00:04:15.796 { 00:04:15.796 "dma_device_id": "system", 00:04:15.796 "dma_device_type": 1 00:04:15.796 }, 00:04:15.796 { 00:04:15.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.796 "dma_device_type": 2 00:04:15.796 } 00:04:15.796 ], 00:04:15.796 "driver_specific": {} 00:04:15.796 }, 00:04:15.796 { 00:04:15.796 "name": "Passthru0", 00:04:15.796 "aliases": [ 00:04:15.796 "db8fa22a-a8e0-5692-9567-01ac6aa29e25" 00:04:15.796 ], 00:04:15.796 "product_name": "passthru", 00:04:15.796 "block_size": 512, 00:04:15.796 "num_blocks": 16384, 00:04:15.796 "uuid": "db8fa22a-a8e0-5692-9567-01ac6aa29e25", 00:04:15.796 "assigned_rate_limits": { 00:04:15.796 "rw_ios_per_sec": 0, 00:04:15.796 "rw_mbytes_per_sec": 0, 00:04:15.796 "r_mbytes_per_sec": 0, 00:04:15.796 "w_mbytes_per_sec": 0 00:04:15.796 }, 00:04:15.796 "claimed": false, 00:04:15.796 "zoned": false, 00:04:15.796 "supported_io_types": { 00:04:15.796 "read": true, 00:04:15.796 "write": true, 00:04:15.796 "unmap": true, 00:04:15.796 "flush": true, 00:04:15.796 "reset": true, 
00:04:15.796 "nvme_admin": false, 00:04:15.796 "nvme_io": false, 00:04:15.796 "nvme_io_md": false, 00:04:15.796 "write_zeroes": true, 00:04:15.796 "zcopy": true, 00:04:15.796 "get_zone_info": false, 00:04:15.796 "zone_management": false, 00:04:15.796 "zone_append": false, 00:04:15.796 "compare": false, 00:04:15.796 "compare_and_write": false, 00:04:15.796 "abort": true, 00:04:15.796 "seek_hole": false, 00:04:15.796 "seek_data": false, 00:04:15.796 "copy": true, 00:04:15.796 "nvme_iov_md": false 00:04:15.796 }, 00:04:15.796 "memory_domains": [ 00:04:15.796 { 00:04:15.796 "dma_device_id": "system", 00:04:15.796 "dma_device_type": 1 00:04:15.796 }, 00:04:15.796 { 00:04:15.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.796 "dma_device_type": 2 00:04:15.796 } 00:04:15.796 ], 00:04:15.796 "driver_specific": { 00:04:15.796 "passthru": { 00:04:15.796 "name": "Passthru0", 00:04:15.796 "base_bdev_name": "Malloc2" 00:04:15.796 } 00:04:15.796 } 00:04:15.796 } 00:04:15.796 ]' 00:04:15.796 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:15.796 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:15.796 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:15.796 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.796 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.796 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.796 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:15.796 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.796 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.796 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.796 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:15.796 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.796 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.796 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.796 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:15.796 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.055 17:41:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.055 00:04:16.055 real 0m0.205s 00:04:16.055 user 0m0.109s 00:04:16.055 sys 0m0.033s 00:04:16.055 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.055 17:41:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.055 ************************************ 00:04:16.055 END TEST rpc_daemon_integrity 00:04:16.055 ************************************ 00:04:16.055 17:41:03 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:16.055 17:41:03 rpc -- rpc/rpc.sh@84 -- # killprocess 2768119 00:04:16.055 17:41:03 rpc -- common/autotest_common.sh@954 -- # '[' -z 2768119 ']' 00:04:16.055 17:41:03 rpc -- common/autotest_common.sh@958 -- # kill -0 2768119 00:04:16.055 17:41:03 rpc -- common/autotest_common.sh@959 -- # uname 00:04:16.055 17:41:03 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.055 17:41:03 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2768119 
00:04:16.055 17:41:03 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.055 17:41:03 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:16.055 17:41:03 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2768119' 00:04:16.055 killing process with pid 2768119 00:04:16.055 17:41:03 rpc -- common/autotest_common.sh@973 -- # kill 2768119 00:04:16.055 17:41:03 rpc -- common/autotest_common.sh@978 -- # wait 2768119 00:04:16.314 00:04:16.314 real 0m2.133s 00:04:16.314 user 0m2.553s 00:04:16.314 sys 0m0.674s 00:04:16.314 17:41:03 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.314 17:41:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.314 ************************************ 00:04:16.314 END TEST rpc 00:04:16.314 ************************************ 00:04:16.314 17:41:03 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:16.314 17:41:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.314 17:41:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.314 17:41:03 -- common/autotest_common.sh@10 -- # set +x 00:04:16.314 ************************************ 00:04:16.314 START TEST skip_rpc 00:04:16.314 ************************************ 00:04:16.314 17:41:03 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:16.314 * Looking for test storage... 00:04:16.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:16.314 17:41:04 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:16.314 17:41:04 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:16.314 17:41:04 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:16.314 17:41:04 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.314 17:41:04 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:16.314 17:41:04 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.314 17:41:04 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:16.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.314 --rc genhtml_branch_coverage=1 00:04:16.314 --rc genhtml_function_coverage=1 00:04:16.314 --rc genhtml_legend=1 00:04:16.314 --rc geninfo_all_blocks=1 00:04:16.314 --rc geninfo_unexecuted_blocks=1 00:04:16.314 00:04:16.314 ' 00:04:16.314 17:41:04 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:16.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.314 --rc genhtml_branch_coverage=1 00:04:16.314 --rc genhtml_function_coverage=1 00:04:16.314 --rc genhtml_legend=1 00:04:16.314 --rc geninfo_all_blocks=1 00:04:16.314 --rc geninfo_unexecuted_blocks=1 00:04:16.314 00:04:16.314 ' 00:04:16.314 17:41:04 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:16.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.314 --rc genhtml_branch_coverage=1 00:04:16.314 --rc genhtml_function_coverage=1 00:04:16.314 --rc genhtml_legend=1 00:04:16.314 --rc geninfo_all_blocks=1 00:04:16.314 --rc geninfo_unexecuted_blocks=1 00:04:16.314 00:04:16.314 ' 00:04:16.314 17:41:04 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:16.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.314 --rc genhtml_branch_coverage=1 00:04:16.314 --rc genhtml_function_coverage=1 00:04:16.314 --rc genhtml_legend=1 00:04:16.314 --rc geninfo_all_blocks=1 00:04:16.314 --rc geninfo_unexecuted_blocks=1 00:04:16.314 00:04:16.314 ' 00:04:16.314 17:41:04 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:16.314 17:41:04 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:16.314 17:41:04 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:16.314 17:41:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.314 17:41:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.314 17:41:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.314 ************************************ 00:04:16.314 START TEST skip_rpc 00:04:16.314 ************************************ 00:04:16.314 17:41:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:16.314 
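The skip_rpc test that starts below launches spdk_tgt with --no-rpc-server and then asserts that an RPC attempt fails (rpc_cmd returns es=1). The C-level equivalent of that probe is a JSON-RPC client connect against the default UNIX socket; a sketch using SPDK's public client API, under the assumption that nothing else is listening on the socket:

    #include <stdio.h>
    #include <sys/socket.h>
    #include "spdk/jsonrpc.h"

    /* With --no-rpc-server nothing listens on /var/tmp/spdk.sock, so the
     * connect fails -- the same condition the shell test asserts on. */
    static int
    probe_rpc_server(void)
    {
            struct spdk_jsonrpc_client *client;

            client = spdk_jsonrpc_client_connect("/var/tmp/spdk.sock", AF_UNIX);
            if (client == NULL) {
                    printf("no RPC server listening (expected here)\n");
                    return -1;
            }
            spdk_jsonrpc_client_close(client);
            return 0;
    }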
17:41:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2768688 00:04:16.314 17:41:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.314 17:41:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:16.314 17:41:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:16.573 [2024-12-06 17:41:04.170304] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:04:16.573 [2024-12-06 17:41:04.170351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2768688 ] 00:04:16.573 [2024-12-06 17:41:04.234535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.573 [2024-12-06 17:41:04.264556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.841 17:41:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:21.841 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:21.841 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:21.841 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:21.841 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.841 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:21.841 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:21.841 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:21.841 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:21.841 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.841 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:21.841 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:21.841 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:21.841 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:21.841 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:21.841 17:41:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:21.841 17:41:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2768688 00:04:21.842 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2768688 ']' 00:04:21.842 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2768688 00:04:21.842 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:21.842 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:21.842 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2768688 00:04:21.842 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:21.842 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:21.842 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2768688' 00:04:21.842 killing process with pid 2768688 00:04:21.842 17:41:09 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2768688 00:04:21.842 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2768688 00:04:21.842 00:04:21.842 real 0m5.238s 00:04:21.842 user 0m5.063s 00:04:21.842 sys 0m0.207s 00:04:21.842 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.842 17:41:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.842 ************************************ 00:04:21.842 END TEST skip_rpc 00:04:21.842 ************************************ 00:04:21.842 17:41:09 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:21.842 17:41:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.842 17:41:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.842 17:41:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.842 ************************************ 00:04:21.842 START TEST skip_rpc_with_json 00:04:21.842 ************************************ 00:04:21.842 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:21.842 17:41:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:21.842 17:41:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2770032 00:04:21.842 17:41:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.842 17:41:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2770032 00:04:21.842 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2770032 ']' 00:04:21.842 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.842 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.842 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.842 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.842 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.842 17:41:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:21.842 [2024-12-06 17:41:09.454575] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
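The skip_rpc block above exercises the harness's expect-failure wrapper: with the target launched as spdk_tgt --no-rpc-server, `NOT rpc_cmd spdk_get_version` passes only because the RPC call fails. A minimal sketch of that pattern, with placeholder paths and a simplified `not` helper rather than the real autotest_common.sh machinery:

  #!/usr/bin/env bash
  # Sketch of the skip_rpc expect-failure check traced above.
  SPDK_BIN=./build/bin/spdk_tgt   # placeholder for the build path in this log
  RPC_PY=./scripts/rpc.py

  not() {
      # Succeed only when the wrapped command fails (mirrors the NOT/es=1
      # bookkeeping visible in the xtrace output).
      if "$@"; then return 1; fi
      return 0
  }

  $SPDK_BIN --no-rpc-server -m 0x1 &   # no RPC server, so rpc.py must fail
  spdk_pid=$!
  trap 'kill "$spdk_pid"; exit 1' SIGINT SIGTERM EXIT
  sleep 5                              # same settle time the test uses

  if not "$RPC_PY" spdk_get_version; then
      echo "rpc_cmd failed as expected"
  fi

  trap - SIGINT SIGTERM EXIT
  kill "$spdk_pid"
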
00:04:21.842 [2024-12-06 17:41:09.454624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2770032 ] 00:04:21.842 [2024-12-06 17:41:09.520665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.842 [2024-12-06 17:41:09.552141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.101 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.101 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:22.101 17:41:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:22.101 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.101 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.101 [2024-12-06 17:41:09.719834] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:22.101 request: 00:04:22.101 { 00:04:22.101 "trtype": "tcp", 00:04:22.101 "method": "nvmf_get_transports", 00:04:22.101 "req_id": 1 00:04:22.101 } 00:04:22.101 Got JSON-RPC error response 00:04:22.101 response: 00:04:22.101 { 00:04:22.101 "code": -19, 00:04:22.101 "message": "No such device" 00:04:22.101 } 00:04:22.101 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:22.101 17:41:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:22.101 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.101 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.101 [2024-12-06 17:41:09.727922] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:22.101 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.101 17:41:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:22.101 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.101 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.101 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.101 17:41:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:22.101 { 00:04:22.101 "subsystems": [ 00:04:22.101 { 00:04:22.101 "subsystem": "fsdev", 00:04:22.101 "config": [ 00:04:22.101 { 00:04:22.101 "method": "fsdev_set_opts", 00:04:22.101 "params": { 00:04:22.101 "fsdev_io_pool_size": 65535, 00:04:22.101 "fsdev_io_cache_size": 256 00:04:22.101 } 00:04:22.101 } 00:04:22.101 ] 00:04:22.101 }, 00:04:22.101 { 00:04:22.101 "subsystem": "vfio_user_target", 00:04:22.101 "config": null 00:04:22.101 }, 00:04:22.101 { 00:04:22.101 "subsystem": "keyring", 00:04:22.101 "config": [] 00:04:22.101 }, 00:04:22.101 { 00:04:22.101 "subsystem": "iobuf", 00:04:22.101 "config": [ 00:04:22.101 { 00:04:22.101 "method": "iobuf_set_options", 00:04:22.101 "params": { 00:04:22.101 "small_pool_count": 8192, 00:04:22.101 "large_pool_count": 1024, 00:04:22.101 "small_bufsize": 8192, 00:04:22.101 "large_bufsize": 135168, 00:04:22.101 "enable_numa": false 00:04:22.101 } 00:04:22.101 } 
00:04:22.101 ] 00:04:22.101 }, 00:04:22.101 { 00:04:22.101 "subsystem": "sock", 00:04:22.101 "config": [ 00:04:22.101 { 00:04:22.101 "method": "sock_set_default_impl", 00:04:22.101 "params": { 00:04:22.101 "impl_name": "posix" 00:04:22.101 } 00:04:22.101 }, 00:04:22.101 { 00:04:22.101 "method": "sock_impl_set_options", 00:04:22.101 "params": { 00:04:22.101 "impl_name": "ssl", 00:04:22.101 "recv_buf_size": 4096, 00:04:22.101 "send_buf_size": 4096, 00:04:22.101 "enable_recv_pipe": true, 00:04:22.101 "enable_quickack": false, 00:04:22.101 "enable_placement_id": 0, 00:04:22.101 "enable_zerocopy_send_server": true, 00:04:22.101 "enable_zerocopy_send_client": false, 00:04:22.101 "zerocopy_threshold": 0, 00:04:22.101 "tls_version": 0, 00:04:22.101 "enable_ktls": false 00:04:22.101 } 00:04:22.101 }, 00:04:22.101 { 00:04:22.101 "method": "sock_impl_set_options", 00:04:22.101 "params": { 00:04:22.101 "impl_name": "posix", 00:04:22.101 "recv_buf_size": 2097152, 00:04:22.101 "send_buf_size": 2097152, 00:04:22.101 "enable_recv_pipe": true, 00:04:22.101 "enable_quickack": false, 00:04:22.101 "enable_placement_id": 0, 00:04:22.101 "enable_zerocopy_send_server": true, 00:04:22.101 "enable_zerocopy_send_client": false, 00:04:22.101 "zerocopy_threshold": 0, 00:04:22.101 "tls_version": 0, 00:04:22.101 "enable_ktls": false 00:04:22.101 } 00:04:22.101 } 00:04:22.101 ] 00:04:22.101 }, 00:04:22.101 { 00:04:22.101 "subsystem": "vmd", 00:04:22.101 "config": [] 00:04:22.101 }, 00:04:22.101 { 00:04:22.101 "subsystem": "accel", 00:04:22.101 "config": [ 00:04:22.101 { 00:04:22.101 "method": "accel_set_options", 00:04:22.101 "params": { 00:04:22.101 "small_cache_size": 128, 00:04:22.101 "large_cache_size": 16, 00:04:22.101 "task_count": 2048, 00:04:22.101 "sequence_count": 2048, 00:04:22.101 "buf_count": 2048 00:04:22.101 } 00:04:22.101 } 00:04:22.101 ] 00:04:22.101 }, 00:04:22.101 { 00:04:22.101 "subsystem": "bdev", 00:04:22.101 "config": [ 00:04:22.101 { 00:04:22.101 "method": "bdev_set_options", 00:04:22.101 "params": { 00:04:22.101 "bdev_io_pool_size": 65535, 00:04:22.101 "bdev_io_cache_size": 256, 00:04:22.101 "bdev_auto_examine": true, 00:04:22.101 "iobuf_small_cache_size": 128, 00:04:22.101 "iobuf_large_cache_size": 16 00:04:22.101 } 00:04:22.101 }, 00:04:22.101 { 00:04:22.101 "method": "bdev_raid_set_options", 00:04:22.101 "params": { 00:04:22.101 "process_window_size_kb": 1024, 00:04:22.101 "process_max_bandwidth_mb_sec": 0 00:04:22.101 } 00:04:22.101 }, 00:04:22.101 { 00:04:22.101 "method": "bdev_iscsi_set_options", 00:04:22.101 "params": { 00:04:22.101 "timeout_sec": 30 00:04:22.101 } 00:04:22.101 }, 00:04:22.101 { 00:04:22.101 "method": "bdev_nvme_set_options", 00:04:22.101 "params": { 00:04:22.101 "action_on_timeout": "none", 00:04:22.101 "timeout_us": 0, 00:04:22.101 "timeout_admin_us": 0, 00:04:22.101 "keep_alive_timeout_ms": 10000, 00:04:22.101 "arbitration_burst": 0, 00:04:22.101 "low_priority_weight": 0, 00:04:22.101 "medium_priority_weight": 0, 00:04:22.101 "high_priority_weight": 0, 00:04:22.101 "nvme_adminq_poll_period_us": 10000, 00:04:22.101 "nvme_ioq_poll_period_us": 0, 00:04:22.101 "io_queue_requests": 0, 00:04:22.101 "delay_cmd_submit": true, 00:04:22.101 "transport_retry_count": 4, 00:04:22.101 "bdev_retry_count": 3, 00:04:22.101 "transport_ack_timeout": 0, 00:04:22.101 "ctrlr_loss_timeout_sec": 0, 00:04:22.101 "reconnect_delay_sec": 0, 00:04:22.101 "fast_io_fail_timeout_sec": 0, 00:04:22.101 "disable_auto_failback": false, 00:04:22.101 "generate_uuids": false, 00:04:22.102 "transport_tos": 
0, 00:04:22.102 "nvme_error_stat": false, 00:04:22.102 "rdma_srq_size": 0, 00:04:22.102 "io_path_stat": false, 00:04:22.102 "allow_accel_sequence": false, 00:04:22.102 "rdma_max_cq_size": 0, 00:04:22.102 "rdma_cm_event_timeout_ms": 0, 00:04:22.102 "dhchap_digests": [ 00:04:22.102 "sha256", 00:04:22.102 "sha384", 00:04:22.102 "sha512" 00:04:22.102 ], 00:04:22.102 "dhchap_dhgroups": [ 00:04:22.102 "null", 00:04:22.102 "ffdhe2048", 00:04:22.102 "ffdhe3072", 00:04:22.102 "ffdhe4096", 00:04:22.102 "ffdhe6144", 00:04:22.102 "ffdhe8192" 00:04:22.102 ], 00:04:22.102 "rdma_umr_per_io": false 00:04:22.102 } 00:04:22.102 }, 00:04:22.102 { 00:04:22.102 "method": "bdev_nvme_set_hotplug", 00:04:22.102 "params": { 00:04:22.102 "period_us": 100000, 00:04:22.102 "enable": false 00:04:22.102 } 00:04:22.102 }, 00:04:22.102 { 00:04:22.102 "method": "bdev_wait_for_examine" 00:04:22.102 } 00:04:22.102 ] 00:04:22.102 }, 00:04:22.102 { 00:04:22.102 "subsystem": "scsi", 00:04:22.102 "config": null 00:04:22.102 }, 00:04:22.102 { 00:04:22.102 "subsystem": "scheduler", 00:04:22.102 "config": [ 00:04:22.102 { 00:04:22.102 "method": "framework_set_scheduler", 00:04:22.102 "params": { 00:04:22.102 "name": "static" 00:04:22.102 } 00:04:22.102 } 00:04:22.102 ] 00:04:22.102 }, 00:04:22.102 { 00:04:22.102 "subsystem": "vhost_scsi", 00:04:22.102 "config": [] 00:04:22.102 }, 00:04:22.102 { 00:04:22.102 "subsystem": "vhost_blk", 00:04:22.102 "config": [] 00:04:22.102 }, 00:04:22.102 { 00:04:22.102 "subsystem": "ublk", 00:04:22.102 "config": [] 00:04:22.102 }, 00:04:22.102 { 00:04:22.102 "subsystem": "nbd", 00:04:22.102 "config": [] 00:04:22.102 }, 00:04:22.102 { 00:04:22.102 "subsystem": "nvmf", 00:04:22.102 "config": [ 00:04:22.102 { 00:04:22.102 "method": "nvmf_set_config", 00:04:22.102 "params": { 00:04:22.102 "discovery_filter": "match_any", 00:04:22.102 "admin_cmd_passthru": { 00:04:22.102 "identify_ctrlr": false 00:04:22.102 }, 00:04:22.102 "dhchap_digests": [ 00:04:22.102 "sha256", 00:04:22.102 "sha384", 00:04:22.102 "sha512" 00:04:22.102 ], 00:04:22.102 "dhchap_dhgroups": [ 00:04:22.102 "null", 00:04:22.102 "ffdhe2048", 00:04:22.102 "ffdhe3072", 00:04:22.102 "ffdhe4096", 00:04:22.102 "ffdhe6144", 00:04:22.102 "ffdhe8192" 00:04:22.102 ] 00:04:22.102 } 00:04:22.102 }, 00:04:22.102 { 00:04:22.102 "method": "nvmf_set_max_subsystems", 00:04:22.102 "params": { 00:04:22.102 "max_subsystems": 1024 00:04:22.102 } 00:04:22.102 }, 00:04:22.102 { 00:04:22.102 "method": "nvmf_set_crdt", 00:04:22.102 "params": { 00:04:22.102 "crdt1": 0, 00:04:22.102 "crdt2": 0, 00:04:22.102 "crdt3": 0 00:04:22.102 } 00:04:22.102 }, 00:04:22.102 { 00:04:22.102 "method": "nvmf_create_transport", 00:04:22.102 "params": { 00:04:22.102 "trtype": "TCP", 00:04:22.102 "max_queue_depth": 128, 00:04:22.102 "max_io_qpairs_per_ctrlr": 127, 00:04:22.102 "in_capsule_data_size": 4096, 00:04:22.102 "max_io_size": 131072, 00:04:22.102 "io_unit_size": 131072, 00:04:22.102 "max_aq_depth": 128, 00:04:22.102 "num_shared_buffers": 511, 00:04:22.102 "buf_cache_size": 4294967295, 00:04:22.102 "dif_insert_or_strip": false, 00:04:22.102 "zcopy": false, 00:04:22.102 "c2h_success": true, 00:04:22.102 "sock_priority": 0, 00:04:22.102 "abort_timeout_sec": 1, 00:04:22.102 "ack_timeout": 0, 00:04:22.102 "data_wr_pool_size": 0 00:04:22.102 } 00:04:22.102 } 00:04:22.102 ] 00:04:22.102 }, 00:04:22.102 { 00:04:22.102 "subsystem": "iscsi", 00:04:22.102 "config": [ 00:04:22.102 { 00:04:22.102 "method": "iscsi_set_options", 00:04:22.102 "params": { 00:04:22.102 "node_base": 
"iqn.2016-06.io.spdk", 00:04:22.102 "max_sessions": 128, 00:04:22.102 "max_connections_per_session": 2, 00:04:22.102 "max_queue_depth": 64, 00:04:22.102 "default_time2wait": 2, 00:04:22.102 "default_time2retain": 20, 00:04:22.102 "first_burst_length": 8192, 00:04:22.102 "immediate_data": true, 00:04:22.102 "allow_duplicated_isid": false, 00:04:22.102 "error_recovery_level": 0, 00:04:22.102 "nop_timeout": 60, 00:04:22.102 "nop_in_interval": 30, 00:04:22.102 "disable_chap": false, 00:04:22.102 "require_chap": false, 00:04:22.102 "mutual_chap": false, 00:04:22.102 "chap_group": 0, 00:04:22.102 "max_large_datain_per_connection": 64, 00:04:22.102 "max_r2t_per_connection": 4, 00:04:22.102 "pdu_pool_size": 36864, 00:04:22.102 "immediate_data_pool_size": 16384, 00:04:22.102 "data_out_pool_size": 2048 00:04:22.102 } 00:04:22.102 } 00:04:22.102 ] 00:04:22.102 } 00:04:22.102 ] 00:04:22.102 } 00:04:22.102 17:41:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:22.102 17:41:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2770032 00:04:22.102 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2770032 ']' 00:04:22.102 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2770032 00:04:22.102 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:22.102 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.102 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2770032 00:04:22.361 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.361 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.361 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2770032' 00:04:22.361 killing process with pid 2770032 00:04:22.361 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2770032 00:04:22.361 17:41:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2770032 00:04:22.361 17:41:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2770060 00:04:22.361 17:41:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:22.361 17:41:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2770060 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2770060 ']' 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2770060 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2770060 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.664 17:41:15 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2770060' 00:04:27.664 killing process with pid 2770060 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2770060 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2770060 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:27.664 00:04:27.664 real 0m5.948s 00:04:27.664 user 0m5.742s 00:04:27.664 sys 0m0.444s 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.664 ************************************ 00:04:27.664 END TEST skip_rpc_with_json 00:04:27.664 ************************************ 00:04:27.664 17:41:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:27.664 17:41:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.664 17:41:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.664 17:41:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.664 ************************************ 00:04:27.664 START TEST skip_rpc_with_delay 00:04:27.664 ************************************ 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:27.664 [2024-12-06 17:41:15.452262] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:27.664 00:04:27.664 real 0m0.058s 00:04:27.664 user 0m0.037s 00:04:27.664 sys 0m0.020s 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.664 17:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:27.664 ************************************ 00:04:27.664 END TEST skip_rpc_with_delay 00:04:27.664 ************************************ 00:04:27.664 17:41:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:27.922 17:41:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:27.922 17:41:15 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:27.922 17:41:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.922 17:41:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.922 17:41:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.922 ************************************ 00:04:27.922 START TEST exit_on_failed_rpc_init 00:04:27.922 ************************************ 00:04:27.922 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:27.922 17:41:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2771445 00:04:27.922 17:41:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2771445 00:04:27.922 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2771445 ']' 00:04:27.922 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.922 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.922 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.922 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.922 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:27.922 17:41:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:27.922 [2024-12-06 17:41:15.552989] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:04:27.922 [2024-12-06 17:41:15.553038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2771445 ] 00:04:27.922 [2024-12-06 17:41:15.617517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.922 [2024-12-06 17:41:15.647470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.180 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.180 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:28.180 17:41:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.180 17:41:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:28.180 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:28.180 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:28.180 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.180 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.180 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.180 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.180 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.180 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:28.180 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.180 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:28.180 17:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:28.180 [2024-12-06 17:41:15.853267] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:04:28.180 [2024-12-06 17:41:15.853315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2771451 ] 00:04:28.180 [2024-12-06 17:41:15.930305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.180 [2024-12-06 17:41:15.966266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.180 [2024-12-06 17:41:15.966319] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
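The two rpc.c *ERROR* lines below are the point of this test: a second spdk_tgt bound to the RPC socket the first instance still owns must fail init and exit non-zero. A rough standalone reproduction, assuming both instances use the default /var/tmp/spdk.sock as in this run:

  #!/usr/bin/env bash
  # Sketch of the exit_on_failed_rpc_init collision exercised here.
  SPDK_BIN=./build/bin/spdk_tgt   # placeholder for the build path in this log

  $SPDK_BIN -m 0x1 &              # first instance owns /var/tmp/spdk.sock
  first_pid=$!
  sleep 5                         # crude stand-in for the waitforlisten helper

  if $SPDK_BIN -m 0x2; then       # second instance, same default socket
      echo "unexpected: second target started" >&2
      kill "$first_pid"
      exit 1
  fi
  echo "second target failed as expected"
  kill "$first_pid"
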
00:04:28.180 [2024-12-06 17:41:15.966329] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:28.180 [2024-12-06 17:41:15.966336] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:28.180 17:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:28.180 17:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:28.180 17:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:28.180 17:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:28.180 17:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:28.180 17:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:28.180 17:41:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:28.180 17:41:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2771445 00:04:28.180 17:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2771445 ']' 00:04:28.180 17:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2771445 00:04:28.438 17:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:28.438 17:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.438 17:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2771445 00:04:28.438 17:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.438 17:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.438 17:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2771445' 00:04:28.438 killing process with pid 2771445 00:04:28.438 17:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2771445 00:04:28.438 17:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2771445 00:04:28.438 00:04:28.438 real 0m0.724s 00:04:28.438 user 0m0.814s 00:04:28.438 sys 0m0.292s 00:04:28.438 17:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.438 17:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:28.438 ************************************ 00:04:28.438 END TEST exit_on_failed_rpc_init 00:04:28.438 ************************************ 00:04:28.438 17:41:16 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:28.697 00:04:28.697 real 0m12.281s 00:04:28.697 user 0m11.803s 00:04:28.697 sys 0m1.146s 00:04:28.697 17:41:16 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.697 17:41:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.697 ************************************ 00:04:28.697 END TEST skip_rpc 00:04:28.697 ************************************ 00:04:28.697 17:41:16 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:28.697 17:41:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.697 17:41:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.697 17:41:16 -- 
common/autotest_common.sh@10 -- # set +x 00:04:28.697 ************************************ 00:04:28.697 START TEST rpc_client 00:04:28.697 ************************************ 00:04:28.697 17:41:16 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:28.697 * Looking for test storage... 00:04:28.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:28.697 17:41:16 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:28.697 17:41:16 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:28.697 17:41:16 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:28.697 17:41:16 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:28.697 17:41:16 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.697 17:41:16 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.697 17:41:16 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.697 17:41:16 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.697 17:41:16 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.697 17:41:16 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.697 17:41:16 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.697 17:41:16 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.697 17:41:16 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.697 17:41:16 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.697 17:41:16 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.698 17:41:16 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:28.698 17:41:16 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:28.698 17:41:16 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.698 17:41:16 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.698 17:41:16 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:28.698 17:41:16 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:28.698 17:41:16 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.698 17:41:16 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:28.698 17:41:16 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.698 17:41:16 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:28.698 17:41:16 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:28.698 17:41:16 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.698 17:41:16 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:28.698 17:41:16 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.698 17:41:16 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.698 17:41:16 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.698 17:41:16 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:28.698 17:41:16 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.698 17:41:16 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:28.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.698 --rc genhtml_branch_coverage=1 00:04:28.698 --rc genhtml_function_coverage=1 00:04:28.698 --rc genhtml_legend=1 00:04:28.698 --rc geninfo_all_blocks=1 00:04:28.698 --rc geninfo_unexecuted_blocks=1 00:04:28.698 00:04:28.698 ' 00:04:28.698 17:41:16 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:28.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.698 --rc genhtml_branch_coverage=1 00:04:28.698 --rc genhtml_function_coverage=1 00:04:28.698 --rc genhtml_legend=1 00:04:28.698 --rc geninfo_all_blocks=1 00:04:28.698 --rc geninfo_unexecuted_blocks=1 00:04:28.698 00:04:28.698 ' 00:04:28.698 17:41:16 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:28.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.698 --rc genhtml_branch_coverage=1 00:04:28.698 --rc genhtml_function_coverage=1 00:04:28.698 --rc genhtml_legend=1 00:04:28.698 --rc geninfo_all_blocks=1 00:04:28.698 --rc geninfo_unexecuted_blocks=1 00:04:28.698 00:04:28.698 ' 00:04:28.698 17:41:16 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:28.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.698 --rc genhtml_branch_coverage=1 00:04:28.698 --rc genhtml_function_coverage=1 00:04:28.698 --rc genhtml_legend=1 00:04:28.698 --rc geninfo_all_blocks=1 00:04:28.698 --rc geninfo_unexecuted_blocks=1 00:04:28.698 00:04:28.698 ' 00:04:28.698 17:41:16 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:28.698 OK 00:04:28.698 17:41:16 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:28.698 00:04:28.698 real 0m0.134s 00:04:28.698 user 0m0.079s 00:04:28.698 sys 0m0.061s 00:04:28.698 17:41:16 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.698 17:41:16 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:28.698 ************************************ 00:04:28.698 END TEST rpc_client 00:04:28.698 ************************************ 00:04:28.698 17:41:16 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:04:28.698 17:41:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.698 17:41:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.698 17:41:16 -- common/autotest_common.sh@10 -- # set +x 00:04:28.698 ************************************ 00:04:28.698 START TEST json_config 00:04:28.698 ************************************ 00:04:28.698 17:41:16 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:28.958 17:41:16 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:28.958 17:41:16 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:28.958 17:41:16 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:28.958 17:41:16 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:28.958 17:41:16 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.958 17:41:16 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.958 17:41:16 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.958 17:41:16 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.958 17:41:16 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.958 17:41:16 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.958 17:41:16 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.958 17:41:16 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.958 17:41:16 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.958 17:41:16 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.958 17:41:16 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.958 17:41:16 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:28.958 17:41:16 json_config -- scripts/common.sh@345 -- # : 1 00:04:28.958 17:41:16 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.958 17:41:16 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.958 17:41:16 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:28.958 17:41:16 json_config -- scripts/common.sh@353 -- # local d=1 00:04:28.958 17:41:16 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.958 17:41:16 json_config -- scripts/common.sh@355 -- # echo 1 00:04:28.958 17:41:16 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.958 17:41:16 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:28.958 17:41:16 json_config -- scripts/common.sh@353 -- # local d=2 00:04:28.958 17:41:16 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.958 17:41:16 json_config -- scripts/common.sh@355 -- # echo 2 00:04:28.958 17:41:16 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.958 17:41:16 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.958 17:41:16 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.958 17:41:16 json_config -- scripts/common.sh@368 -- # return 0 00:04:28.958 17:41:16 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.958 17:41:16 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:28.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.958 --rc genhtml_branch_coverage=1 00:04:28.958 --rc genhtml_function_coverage=1 00:04:28.958 --rc genhtml_legend=1 00:04:28.958 --rc geninfo_all_blocks=1 00:04:28.958 --rc geninfo_unexecuted_blocks=1 00:04:28.958 00:04:28.958 ' 00:04:28.958 17:41:16 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:28.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.958 --rc genhtml_branch_coverage=1 00:04:28.958 --rc genhtml_function_coverage=1 00:04:28.958 --rc genhtml_legend=1 00:04:28.958 --rc geninfo_all_blocks=1 00:04:28.958 --rc geninfo_unexecuted_blocks=1 00:04:28.958 00:04:28.958 ' 00:04:28.958 17:41:16 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:28.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.958 --rc genhtml_branch_coverage=1 00:04:28.958 --rc genhtml_function_coverage=1 00:04:28.958 --rc genhtml_legend=1 00:04:28.958 --rc geninfo_all_blocks=1 00:04:28.958 --rc geninfo_unexecuted_blocks=1 00:04:28.958 00:04:28.958 ' 00:04:28.958 17:41:16 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:28.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.958 --rc genhtml_branch_coverage=1 00:04:28.958 --rc genhtml_function_coverage=1 00:04:28.958 --rc genhtml_legend=1 00:04:28.958 --rc geninfo_all_blocks=1 00:04:28.958 --rc geninfo_unexecuted_blocks=1 00:04:28.958 00:04:28.958 ' 00:04:28.958 17:41:16 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:28.958 17:41:16 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:28.958 17:41:16 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:28.958 17:41:16 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:28.958 17:41:16 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:28.958 17:41:16 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:28.958 17:41:16 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:28.958 17:41:16 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:28.958 17:41:16 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:28.958 17:41:16 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:28.958 17:41:16 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:28.958 17:41:16 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:28.958 17:41:16 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:28.958 17:41:16 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:28.958 17:41:16 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:28.958 17:41:16 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:28.958 17:41:16 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:28.958 17:41:16 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:28.958 17:41:16 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:28.958 17:41:16 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:28.958 17:41:16 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:28.958 17:41:16 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:28.958 17:41:16 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:28.958 17:41:16 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.958 17:41:16 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.959 17:41:16 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.959 17:41:16 json_config -- paths/export.sh@5 -- # export PATH 00:04:28.959 17:41:16 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.959 17:41:16 json_config -- nvmf/common.sh@51 -- # : 0 00:04:28.959 17:41:16 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:28.959 17:41:16 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:28.959 17:41:16 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:28.959 17:41:16 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:28.959 17:41:16 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:28.959 17:41:16 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:28.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:28.959 17:41:16 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:28.959 17:41:16 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:28.959 17:41:16 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:28.959 INFO: JSON configuration test init 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:28.959 17:41:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.959 17:41:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:28.959 17:41:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.959 17:41:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.959 17:41:16 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:28.959 17:41:16 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:28.959 17:41:16 json_config -- json_config/common.sh@10 -- # shift 00:04:28.959 17:41:16 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:28.959 17:41:16 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:28.959 17:41:16 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:28.959 17:41:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.959 17:41:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.959 17:41:16 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2771707 00:04:28.959 17:41:16 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:28.959 Waiting for target to run... 00:04:28.959 17:41:16 json_config -- json_config/common.sh@25 -- # waitforlisten 2771707 /var/tmp/spdk_tgt.sock 00:04:28.959 17:41:16 json_config -- common/autotest_common.sh@835 -- # '[' -z 2771707 ']' 00:04:28.959 17:41:16 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:28.959 17:41:16 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.959 17:41:16 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:28.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:28.959 17:41:16 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.959 17:41:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.959 17:41:16 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:28.959 [2024-12-06 17:41:16.678952] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:04:28.959 [2024-12-06 17:41:16.679028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2771707 ] 00:04:29.527 [2024-12-06 17:41:17.071669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.527 [2024-12-06 17:41:17.095682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.786 17:41:17 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.786 17:41:17 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:29.786 17:41:17 json_config -- json_config/common.sh@26 -- # echo '' 00:04:29.786 00:04:29.786 17:41:17 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:29.786 17:41:17 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:29.786 17:41:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.786 17:41:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.786 17:41:17 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:29.786 17:41:17 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:29.786 17:41:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:29.786 17:41:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.786 17:41:17 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:29.786 17:41:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:29.786 17:41:17 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:30.354 17:41:18 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:30.354 17:41:18 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:30.354 17:41:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.354 17:41:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.354 17:41:18 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:30.354 17:41:18 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:30.354 17:41:18 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:30.354 17:41:18 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:30.354 17:41:18 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:30.354 17:41:18 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:30.354 17:41:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:30.354 17:41:18 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:30.354 17:41:18 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:30.354 17:41:18 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:30.354 17:41:18 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:30.612 17:41:18 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:30.612 17:41:18 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:30.612 17:41:18 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:30.612 17:41:18 json_config -- json_config/json_config.sh@54 -- # sort 00:04:30.612 17:41:18 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:30.612 17:41:18 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:30.612 17:41:18 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:30.612 17:41:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:30.612 17:41:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.612 17:41:18 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:30.612 17:41:18 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:30.612 17:41:18 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:30.612 17:41:18 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:30.612 17:41:18 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:30.612 17:41:18 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:30.612 17:41:18 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:30.612 17:41:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.612 17:41:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.612 17:41:18 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:30.612 17:41:18 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:30.613 17:41:18 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:30.613 17:41:18 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:30.613 17:41:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:30.613 MallocForNvmf0 00:04:30.613 17:41:18 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:30.613 17:41:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:30.878 MallocForNvmf1 00:04:30.878 17:41:18 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:30.878 17:41:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:30.878 [2024-12-06 17:41:18.661088] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.878 17:41:18 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:30.878 17:41:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:31.137 17:41:18 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:31.137 17:41:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:31.395 17:41:18 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:31.395 17:41:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:31.395 17:41:19 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:31.396 17:41:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:31.654 [2024-12-06 17:41:19.267023] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:31.654 17:41:19 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:31.654 17:41:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.654 17:41:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.654 17:41:19 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:31.654 17:41:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.654 17:41:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.654 17:41:19 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:31.654 17:41:19 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.654 17:41:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.654 MallocBdevForConfigChangeCheck 00:04:31.913 17:41:19 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:31.913 17:41:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.913 17:41:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.913 17:41:19 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:31.913 17:41:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.171 17:41:19 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:32.171 INFO: shutting down applications... 
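[annotation] The tgt_check_notification_types trace above compares the expected and reported notification types with a sort/uniq symmetric difference rather than an explicit loop. A condensed, self-contained sketch of that step (array values copied from the trace):

    enabled_types=(bdev_register bdev_unregister fsdev_register fsdev_unregister)
    get_types=(fsdev_register fsdev_unregister bdev_register bdev_unregister)
    # Every type present in both lists appears twice after concatenation, so
    # `uniq -u` (print lines occurring exactly once) leaves only mismatches.
    type_diff=$(echo "${enabled_types[@]}" "${get_types[@]}" | tr ' ' '\n' | sort | uniq -u)
    [[ -z $type_diff ]] && echo 'notification types match'

An empty type_diff is exactly the `type_diff=` / `[[ -n '' ]]` outcome visible in the trace.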
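[annotation] The create_nvmf_subsystem_config phase traced in the preceding records reduces to the following RPC sequence (commands and arguments reconstructed verbatim from the trace; a sketch for readability, not a replacement for json_config.sh):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB malloc bdev, 512-byte blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB malloc bdev, 1024-byte blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0         # -u/-c values as traced
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The two "TCP Transport Init" and "Target Listening on 127.0.0.1 port 4420" notices in the log correspond to the nvmf_create_transport and nvmf_subsystem_add_listener calls respectively.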
00:04:32.171 17:41:19 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:32.171 17:41:19 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:32.171 17:41:19 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:32.171 17:41:19 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:32.431 Calling clear_iscsi_subsystem 00:04:32.431 Calling clear_nvmf_subsystem 00:04:32.431 Calling clear_nbd_subsystem 00:04:32.431 Calling clear_ublk_subsystem 00:04:32.431 Calling clear_vhost_blk_subsystem 00:04:32.431 Calling clear_vhost_scsi_subsystem 00:04:32.431 Calling clear_bdev_subsystem 00:04:32.431 17:41:20 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:32.431 17:41:20 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:32.431 17:41:20 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:32.431 17:41:20 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:32.431 17:41:20 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.431 17:41:20 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:32.999 17:41:20 json_config -- json_config/json_config.sh@352 -- # break 00:04:32.999 17:41:20 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:32.999 17:41:20 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:32.999 17:41:20 json_config -- json_config/common.sh@31 -- # local app=target 00:04:32.999 17:41:20 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:32.999 17:41:20 json_config -- json_config/common.sh@35 -- # [[ -n 2771707 ]] 00:04:32.999 17:41:20 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2771707 00:04:32.999 17:41:20 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:32.999 17:41:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:32.999 17:41:20 json_config -- json_config/common.sh@41 -- # kill -0 2771707 00:04:32.999 17:41:20 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.258 17:41:21 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.258 17:41:21 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.258 17:41:21 json_config -- json_config/common.sh@41 -- # kill -0 2771707 00:04:33.258 17:41:21 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:33.258 17:41:21 json_config -- json_config/common.sh@43 -- # break 00:04:33.258 17:41:21 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:33.258 17:41:21 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:33.258 SPDK target shutdown done 00:04:33.258 17:41:21 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:33.258 INFO: relaunching applications... 
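[annotation] The shutdown traced above (kill -SIGINT followed by repeated kill -0 probes) is the generic json_config/common.sh pattern. A condensed sketch, assuming the PID from the trace; the real helper also clears app_pid["$app"] and errors out on timeout:

    app_pid=2771707                    # PID taken from the trace above
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do   # up to ~15 s at 0.5 s per probe
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done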
00:04:33.258 17:41:21 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.258 17:41:21 json_config -- json_config/common.sh@9 -- # local app=target 00:04:33.258 17:41:21 json_config -- json_config/common.sh@10 -- # shift 00:04:33.258 17:41:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:33.258 17:41:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:33.258 17:41:21 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:33.258 17:41:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.258 17:41:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.258 17:41:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2772761 00:04:33.258 17:41:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:33.258 Waiting for target to run... 00:04:33.258 17:41:21 json_config -- json_config/common.sh@25 -- # waitforlisten 2772761 /var/tmp/spdk_tgt.sock 00:04:33.258 17:41:21 json_config -- common/autotest_common.sh@835 -- # '[' -z 2772761 ']' 00:04:33.258 17:41:21 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.258 17:41:21 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.258 17:41:21 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:33.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:33.258 17:41:21 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.258 17:41:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.258 17:41:21 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.258 [2024-12-06 17:41:21.064642] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:04:33.258 [2024-12-06 17:41:21.064702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2772761 ] 00:04:33.827 [2024-12-06 17:41:21.464469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.827 [2024-12-06 17:41:21.497317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.411 [2024-12-06 17:41:21.998377] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.411 [2024-12-06 17:41:22.030737] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:34.411 17:41:22 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.411 17:41:22 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:34.411 17:41:22 json_config -- json_config/common.sh@26 -- # echo '' 00:04:34.411 00:04:34.411 17:41:22 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:34.411 17:41:22 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:34.411 INFO: Checking if target configuration is the same... 
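[annotation] The relaunch above boils down to starting spdk_tgt from the JSON saved by the previous instance and polling its RPC socket until it answers. A sketch, where wait_for_rpc is a hypothetical stand-in for the autotest waitforlisten helper:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$SPDK/spdk_tgt_config.json" &
    tgt_pid=$!

    wait_for_rpc() {                   # hypothetical helper: poll until RPC answers
        local pid=$1 sock=$2
        while kill -0 "$pid" 2>/dev/null; do
            "$SPDK/scripts/rpc.py" -s "$sock" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1                       # target died before the socket came up
    }
    wait_for_rpc "$tgt_pid" /var/tmp/spdk_tgt.sock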
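[annotation] The configuration check that follows (json_diff.sh, traced below) canonicalizes both sides before diffing, so key ordering in the JSON cannot cause false mismatches. Condensed sketch of that flow:

    # Dump the live config and the reference file, normalize both with
    # config_filter.py -method sort, then compare. Match exits 0, diff exits 1.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    tmp_live=$(mktemp /tmp/62.XXX)
    tmp_file=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
        | "$SPDK/test/json_config/config_filter.py" -method sort > "$tmp_live"
    "$SPDK/test/json_config/config_filter.py" -method sort \
        < "$SPDK/spdk_tgt_config.json" > "$tmp_file"
    diff -u "$tmp_live" "$tmp_file" && echo 'INFO: JSON config files are the same'
    rm "$tmp_live" "$tmp_file"

In the run below, deleting MallocBdevForConfigChangeCheck between the two comparisons is what flips the result from "files are the same" to "configuration change detected".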
00:04:34.411 17:41:22 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.411 17:41:22 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:34.411 17:41:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.411 + '[' 2 -ne 2 ']' 00:04:34.411 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:34.411 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:34.411 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:34.411 +++ basename /dev/fd/62 00:04:34.411 ++ mktemp /tmp/62.XXX 00:04:34.411 + tmp_file_1=/tmp/62.ANG 00:04:34.411 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.411 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:34.411 + tmp_file_2=/tmp/spdk_tgt_config.json.cZ1 00:04:34.411 + ret=0 00:04:34.411 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:34.670 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:34.670 + diff -u /tmp/62.ANG /tmp/spdk_tgt_config.json.cZ1 00:04:34.670 + echo 'INFO: JSON config files are the same' 00:04:34.670 INFO: JSON config files are the same 00:04:34.670 + rm /tmp/62.ANG /tmp/spdk_tgt_config.json.cZ1 00:04:34.670 + exit 0 00:04:34.670 17:41:22 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:34.670 17:41:22 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:34.670 INFO: changing configuration and checking if this can be detected... 00:04:34.670 17:41:22 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:34.670 17:41:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:34.930 17:41:22 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.930 17:41:22 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:34.930 17:41:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.930 + '[' 2 -ne 2 ']' 00:04:34.930 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:34.930 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:34.930 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:34.930 +++ basename /dev/fd/62 00:04:34.930 ++ mktemp /tmp/62.XXX 00:04:34.930 + tmp_file_1=/tmp/62.3Vj 00:04:34.930 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.930 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:34.930 + tmp_file_2=/tmp/spdk_tgt_config.json.Iie 00:04:34.930 + ret=0 00:04:34.930 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.190 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:35.190 + diff -u /tmp/62.3Vj /tmp/spdk_tgt_config.json.Iie 00:04:35.190 + ret=1 00:04:35.190 + echo '=== Start of file: /tmp/62.3Vj ===' 00:04:35.190 + cat /tmp/62.3Vj 00:04:35.190 + echo '=== End of file: /tmp/62.3Vj ===' 00:04:35.190 + echo '' 00:04:35.190 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Iie ===' 00:04:35.190 + cat /tmp/spdk_tgt_config.json.Iie 00:04:35.190 + echo '=== End of file: /tmp/spdk_tgt_config.json.Iie ===' 00:04:35.190 + echo '' 00:04:35.190 + rm /tmp/62.3Vj /tmp/spdk_tgt_config.json.Iie 00:04:35.190 + exit 1 00:04:35.190 17:41:22 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:35.190 INFO: configuration change detected. 00:04:35.190 17:41:22 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:35.190 17:41:22 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:35.190 17:41:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.190 17:41:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.190 17:41:22 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:35.190 17:41:22 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:35.190 17:41:22 json_config -- json_config/json_config.sh@324 -- # [[ -n 2772761 ]] 00:04:35.190 17:41:22 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:35.190 17:41:22 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:35.190 17:41:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.190 17:41:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.190 17:41:22 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:35.190 17:41:22 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:35.190 17:41:22 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:35.190 17:41:22 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:35.190 17:41:22 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:35.190 17:41:22 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:35.190 17:41:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:35.190 17:41:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.190 17:41:22 json_config -- json_config/json_config.sh@330 -- # killprocess 2772761 00:04:35.190 17:41:22 json_config -- common/autotest_common.sh@954 -- # '[' -z 2772761 ']' 00:04:35.190 17:41:22 json_config -- common/autotest_common.sh@958 -- # kill -0 2772761 00:04:35.190 17:41:22 json_config -- common/autotest_common.sh@959 -- # uname 00:04:35.190 17:41:22 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.190 17:41:22 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2772761 00:04:35.190 17:41:22 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.190 17:41:22 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.190 17:41:22 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2772761' 00:04:35.190 killing process with pid 2772761 00:04:35.190 17:41:22 json_config -- common/autotest_common.sh@973 -- # kill 2772761 00:04:35.190 17:41:22 json_config -- common/autotest_common.sh@978 -- # wait 2772761 00:04:35.449 17:41:23 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:35.449 17:41:23 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:35.449 17:41:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:35.449 17:41:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.449 17:41:23 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:35.449 17:41:23 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:35.449 INFO: Success 00:04:35.449 00:04:35.449 real 0m6.762s 00:04:35.449 user 0m7.727s 00:04:35.449 sys 0m1.773s 00:04:35.449 17:41:23 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.449 17:41:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.449 ************************************ 00:04:35.449 END TEST json_config 00:04:35.449 ************************************ 00:04:35.709 17:41:23 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:35.709 17:41:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.709 17:41:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.709 17:41:23 -- common/autotest_common.sh@10 -- # set +x 00:04:35.709 ************************************ 00:04:35.709 START TEST json_config_extra_key 00:04:35.709 ************************************ 00:04:35.709 17:41:23 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:35.709 17:41:23 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:35.709 17:41:23 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:35.709 17:41:23 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:35.709 17:41:23 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.709 17:41:23 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.709 17:41:23 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:35.709 17:41:23 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.709 17:41:23 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:35.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.709 --rc genhtml_branch_coverage=1 00:04:35.709 --rc genhtml_function_coverage=1 00:04:35.709 --rc genhtml_legend=1 00:04:35.709 --rc geninfo_all_blocks=1 00:04:35.709 --rc geninfo_unexecuted_blocks=1 00:04:35.709 00:04:35.709 ' 00:04:35.709 17:41:23 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:35.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.709 --rc genhtml_branch_coverage=1 00:04:35.709 --rc genhtml_function_coverage=1 00:04:35.709 --rc genhtml_legend=1 00:04:35.709 --rc geninfo_all_blocks=1 00:04:35.709 --rc geninfo_unexecuted_blocks=1 00:04:35.709 00:04:35.709 ' 00:04:35.709 17:41:23 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:35.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.709 --rc genhtml_branch_coverage=1 00:04:35.709 --rc genhtml_function_coverage=1 00:04:35.709 --rc genhtml_legend=1 00:04:35.709 --rc geninfo_all_blocks=1 00:04:35.709 --rc geninfo_unexecuted_blocks=1 00:04:35.709 00:04:35.709 ' 00:04:35.709 17:41:23 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:35.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.709 --rc genhtml_branch_coverage=1 00:04:35.709 --rc genhtml_function_coverage=1 00:04:35.709 --rc genhtml_legend=1 00:04:35.709 --rc geninfo_all_blocks=1 00:04:35.709 --rc geninfo_unexecuted_blocks=1 00:04:35.709 00:04:35.709 ' 00:04:35.709 17:41:23 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:35.709 17:41:23 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:35.709 17:41:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:35.709 17:41:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:35.709 17:41:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:35.709 17:41:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:35.710 17:41:23 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:35.710 17:41:23 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:35.710 17:41:23 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:35.710 17:41:23 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:35.710 17:41:23 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.710 17:41:23 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.710 17:41:23 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.710 17:41:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:35.710 17:41:23 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:35.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:35.710 17:41:23 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:35.710 17:41:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:35.710 17:41:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:35.710 17:41:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:35.710 17:41:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:35.710 17:41:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:35.710 17:41:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:35.710 17:41:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:35.710 17:41:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:35.710 17:41:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:35.710 17:41:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:35.710 17:41:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:35.710 INFO: launching applications... 
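[annotation] The "[: : integer expression expected" message captured above from nvmf/common.sh line 33 is the standard test(1) complaint when -eq is handed an empty string; it is benign here. A minimal reproduction with a hypothetical variable name, plus the usual guard:

    flag=''                         # hypothetical stand-in for the unset value
    [ "$flag" -eq 1 ]               # prints "[: : integer expression expected"
    [ "${flag:-0}" -eq 1 ]          # defaulting to 0 keeps the comparison numeric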
00:04:35.710 17:41:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:35.710 17:41:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:35.710 17:41:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:35.710 17:41:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:35.710 17:41:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:35.710 17:41:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:35.710 17:41:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:35.710 17:41:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:35.710 17:41:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2773505 00:04:35.710 17:41:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:35.710 Waiting for target to run... 00:04:35.710 17:41:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2773505 /var/tmp/spdk_tgt.sock 00:04:35.710 17:41:23 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2773505 ']' 00:04:35.710 17:41:23 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:35.710 17:41:23 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.710 17:41:23 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:35.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:35.710 17:41:23 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.710 17:41:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:35.710 17:41:23 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:35.710 [2024-12-06 17:41:23.468690] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:04:35.710 [2024-12-06 17:41:23.468761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2773505 ] 00:04:35.969 [2024-12-06 17:41:23.779020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.227 [2024-12-06 17:41:23.807528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.485 17:41:24 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.485 17:41:24 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:36.485 17:41:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:36.485 00:04:36.485 17:41:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:36.485 INFO: shutting down applications... 
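[annotation] The app_pid / app_socket / app_params / configs_path assignments visible in the traces are json_config/common.sh's bookkeeping: bash associative arrays keyed by app name ('target' here) so the same helpers can drive several SPDK instances. A sketch of the pattern; the launch line is illustrative only:

    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')

    app=target
    spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" &   # illustrative; real binary lives under build/bin
    app_pid["$app"]=$!
    echo "Waiting for $app to run... (pid ${app_pid[$app]})"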
00:04:36.485 17:41:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:36.485 17:41:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:36.485 17:41:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:36.485 17:41:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2773505 ]] 00:04:36.485 17:41:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2773505 00:04:36.485 17:41:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:36.485 17:41:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:36.485 17:41:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2773505 00:04:36.485 17:41:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:37.054 17:41:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:37.054 17:41:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.054 17:41:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2773505 00:04:37.054 17:41:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:37.054 17:41:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:37.054 17:41:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:37.054 17:41:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:37.054 SPDK target shutdown done 00:04:37.054 17:41:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:37.054 Success 00:04:37.054 00:04:37.054 real 0m1.449s 00:04:37.054 user 0m1.037s 00:04:37.054 sys 0m0.403s 00:04:37.054 17:41:24 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.054 17:41:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:37.054 ************************************ 00:04:37.054 END TEST json_config_extra_key 00:04:37.054 ************************************ 00:04:37.054 17:41:24 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:37.054 17:41:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.054 17:41:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.054 17:41:24 -- common/autotest_common.sh@10 -- # set +x 00:04:37.054 ************************************ 00:04:37.054 START TEST alias_rpc 00:04:37.054 ************************************ 00:04:37.054 17:41:24 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:37.054 * Looking for test storage... 
00:04:37.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:37.054 17:41:24 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:37.054 17:41:24 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:37.054 17:41:24 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:37.313 17:41:24 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.313 17:41:24 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:37.313 17:41:24 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.313 17:41:24 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:37.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.313 --rc genhtml_branch_coverage=1 00:04:37.313 --rc genhtml_function_coverage=1 00:04:37.313 --rc genhtml_legend=1 00:04:37.313 --rc geninfo_all_blocks=1 00:04:37.313 --rc geninfo_unexecuted_blocks=1 00:04:37.313 00:04:37.313 ' 00:04:37.313 17:41:24 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:37.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.313 --rc genhtml_branch_coverage=1 00:04:37.313 --rc genhtml_function_coverage=1 00:04:37.313 --rc genhtml_legend=1 00:04:37.313 --rc geninfo_all_blocks=1 00:04:37.313 --rc geninfo_unexecuted_blocks=1 00:04:37.313 00:04:37.313 ' 00:04:37.313 17:41:24 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:37.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.313 --rc genhtml_branch_coverage=1 00:04:37.313 --rc genhtml_function_coverage=1 00:04:37.313 --rc genhtml_legend=1 00:04:37.313 --rc geninfo_all_blocks=1 00:04:37.313 --rc geninfo_unexecuted_blocks=1 00:04:37.313 00:04:37.313 ' 00:04:37.313 17:41:24 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:37.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.313 --rc genhtml_branch_coverage=1 00:04:37.313 --rc genhtml_function_coverage=1 00:04:37.313 --rc genhtml_legend=1 00:04:37.313 --rc geninfo_all_blocks=1 00:04:37.313 --rc geninfo_unexecuted_blocks=1 00:04:37.313 00:04:37.313 ' 00:04:37.313 17:41:24 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:37.313 17:41:24 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2773896 00:04:37.313 17:41:24 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2773896 00:04:37.313 17:41:24 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2773896 ']' 00:04:37.313 17:41:24 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.313 17:41:24 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.313 17:41:24 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.313 17:41:24 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.313 17:41:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.313 17:41:24 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.313 [2024-12-06 17:41:24.962391] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:04:37.313 [2024-12-06 17:41:24.962458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2773896 ] 00:04:37.313 [2024-12-06 17:41:25.032415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.313 [2024-12-06 17:41:25.068734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.248 17:41:25 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.248 17:41:25 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:38.248 17:41:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:38.248 17:41:25 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2773896 00:04:38.248 17:41:25 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2773896 ']' 00:04:38.248 17:41:25 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2773896 00:04:38.248 17:41:25 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:38.248 17:41:25 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.248 17:41:25 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2773896 00:04:38.248 17:41:25 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.248 17:41:25 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.248 17:41:25 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2773896' 00:04:38.248 killing process with pid 2773896 00:04:38.248 17:41:25 alias_rpc -- common/autotest_common.sh@973 -- # kill 2773896 00:04:38.248 17:41:25 alias_rpc -- common/autotest_common.sh@978 -- # wait 2773896 00:04:38.507 00:04:38.507 real 0m1.354s 00:04:38.507 user 0m1.500s 00:04:38.507 sys 0m0.352s 00:04:38.507 17:41:26 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.507 17:41:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.507 ************************************ 00:04:38.507 END TEST alias_rpc 00:04:38.507 ************************************ 00:04:38.507 17:41:26 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:38.507 17:41:26 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:38.507 17:41:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.507 17:41:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.507 17:41:26 -- common/autotest_common.sh@10 -- # set +x 00:04:38.507 ************************************ 00:04:38.507 START TEST spdkcli_tcp 00:04:38.507 ************************************ 00:04:38.507 17:41:26 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:38.507 * Looking for test storage... 
00:04:38.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:38.507 17:41:26 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:38.507 17:41:26 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:38.507 17:41:26 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:38.507 17:41:26 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.507 17:41:26 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:38.507 17:41:26 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.507 17:41:26 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:38.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.507 --rc genhtml_branch_coverage=1 00:04:38.507 --rc genhtml_function_coverage=1 00:04:38.507 --rc genhtml_legend=1 00:04:38.507 --rc geninfo_all_blocks=1 00:04:38.507 --rc geninfo_unexecuted_blocks=1 00:04:38.507 00:04:38.507 ' 00:04:38.507 17:41:26 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:38.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.507 --rc genhtml_branch_coverage=1 00:04:38.507 --rc genhtml_function_coverage=1 00:04:38.507 --rc genhtml_legend=1 00:04:38.507 --rc geninfo_all_blocks=1 00:04:38.507 --rc 
geninfo_unexecuted_blocks=1 00:04:38.507 00:04:38.507 ' 00:04:38.507 17:41:26 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:38.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.507 --rc genhtml_branch_coverage=1 00:04:38.507 --rc genhtml_function_coverage=1 00:04:38.507 --rc genhtml_legend=1 00:04:38.507 --rc geninfo_all_blocks=1 00:04:38.507 --rc geninfo_unexecuted_blocks=1 00:04:38.507 00:04:38.507 ' 00:04:38.507 17:41:26 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:38.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.507 --rc genhtml_branch_coverage=1 00:04:38.507 --rc genhtml_function_coverage=1 00:04:38.507 --rc genhtml_legend=1 00:04:38.507 --rc geninfo_all_blocks=1 00:04:38.507 --rc geninfo_unexecuted_blocks=1 00:04:38.507 00:04:38.507 ' 00:04:38.507 17:41:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:38.507 17:41:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:38.507 17:41:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:38.507 17:41:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:38.507 17:41:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:38.507 17:41:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:38.507 17:41:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:38.507 17:41:26 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:38.507 17:41:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.766 17:41:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2774289 00:04:38.766 17:41:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2774289 00:04:38.766 17:41:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:38.766 17:41:26 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2774289 ']' 00:04:38.766 17:41:26 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.766 17:41:26 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.766 17:41:26 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.766 17:41:26 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.766 17:41:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.766 [2024-12-06 17:41:26.378493] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:04:38.766 [2024-12-06 17:41:26.378562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2774289 ] 00:04:38.766 [2024-12-06 17:41:26.451114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.766 [2024-12-06 17:41:26.489828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.766 [2024-12-06 17:41:26.489828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.332 17:41:27 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.332 17:41:27 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:39.332 17:41:27 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2774466 00:04:39.332 17:41:27 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:39.332 17:41:27 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:39.592 [ 00:04:39.592 "bdev_malloc_delete", 00:04:39.592 "bdev_malloc_create", 00:04:39.592 "bdev_null_resize", 00:04:39.592 "bdev_null_delete", 00:04:39.592 "bdev_null_create", 00:04:39.592 "bdev_nvme_cuse_unregister", 00:04:39.592 "bdev_nvme_cuse_register", 00:04:39.592 "bdev_opal_new_user", 00:04:39.592 "bdev_opal_set_lock_state", 00:04:39.592 "bdev_opal_delete", 00:04:39.592 "bdev_opal_get_info", 00:04:39.592 "bdev_opal_create", 00:04:39.592 "bdev_nvme_opal_revert", 00:04:39.592 "bdev_nvme_opal_init", 00:04:39.592 "bdev_nvme_send_cmd", 00:04:39.592 "bdev_nvme_set_keys", 00:04:39.592 "bdev_nvme_get_path_iostat", 00:04:39.592 "bdev_nvme_get_mdns_discovery_info", 00:04:39.592 "bdev_nvme_stop_mdns_discovery", 00:04:39.592 "bdev_nvme_start_mdns_discovery", 00:04:39.592 "bdev_nvme_set_multipath_policy", 00:04:39.592 "bdev_nvme_set_preferred_path", 00:04:39.592 "bdev_nvme_get_io_paths", 00:04:39.592 "bdev_nvme_remove_error_injection", 00:04:39.592 "bdev_nvme_add_error_injection", 00:04:39.592 "bdev_nvme_get_discovery_info", 00:04:39.592 "bdev_nvme_stop_discovery", 00:04:39.592 "bdev_nvme_start_discovery", 00:04:39.592 "bdev_nvme_get_controller_health_info", 00:04:39.592 "bdev_nvme_disable_controller", 00:04:39.592 "bdev_nvme_enable_controller", 00:04:39.592 "bdev_nvme_reset_controller", 00:04:39.592 "bdev_nvme_get_transport_statistics", 00:04:39.592 "bdev_nvme_apply_firmware", 00:04:39.592 "bdev_nvme_detach_controller", 00:04:39.592 "bdev_nvme_get_controllers", 00:04:39.592 "bdev_nvme_attach_controller", 00:04:39.592 "bdev_nvme_set_hotplug", 00:04:39.592 "bdev_nvme_set_options", 00:04:39.592 "bdev_passthru_delete", 00:04:39.592 "bdev_passthru_create", 00:04:39.592 "bdev_lvol_set_parent_bdev", 00:04:39.592 "bdev_lvol_set_parent", 00:04:39.592 "bdev_lvol_check_shallow_copy", 00:04:39.592 "bdev_lvol_start_shallow_copy", 00:04:39.592 "bdev_lvol_grow_lvstore", 00:04:39.592 "bdev_lvol_get_lvols", 00:04:39.592 "bdev_lvol_get_lvstores", 00:04:39.592 "bdev_lvol_delete", 00:04:39.592 "bdev_lvol_set_read_only", 00:04:39.592 "bdev_lvol_resize", 00:04:39.592 "bdev_lvol_decouple_parent", 00:04:39.592 "bdev_lvol_inflate", 00:04:39.592 "bdev_lvol_rename", 00:04:39.592 "bdev_lvol_clone_bdev", 00:04:39.592 "bdev_lvol_clone", 00:04:39.592 "bdev_lvol_snapshot", 00:04:39.592 "bdev_lvol_create", 00:04:39.592 "bdev_lvol_delete_lvstore", 00:04:39.592 "bdev_lvol_rename_lvstore", 
00:04:39.592 "bdev_lvol_create_lvstore", 00:04:39.592 "bdev_raid_set_options", 00:04:39.592 "bdev_raid_remove_base_bdev", 00:04:39.592 "bdev_raid_add_base_bdev", 00:04:39.592 "bdev_raid_delete", 00:04:39.592 "bdev_raid_create", 00:04:39.592 "bdev_raid_get_bdevs", 00:04:39.592 "bdev_error_inject_error", 00:04:39.592 "bdev_error_delete", 00:04:39.592 "bdev_error_create", 00:04:39.592 "bdev_split_delete", 00:04:39.592 "bdev_split_create", 00:04:39.592 "bdev_delay_delete", 00:04:39.592 "bdev_delay_create", 00:04:39.592 "bdev_delay_update_latency", 00:04:39.593 "bdev_zone_block_delete", 00:04:39.593 "bdev_zone_block_create", 00:04:39.593 "blobfs_create", 00:04:39.593 "blobfs_detect", 00:04:39.593 "blobfs_set_cache_size", 00:04:39.593 "bdev_aio_delete", 00:04:39.593 "bdev_aio_rescan", 00:04:39.593 "bdev_aio_create", 00:04:39.593 "bdev_ftl_set_property", 00:04:39.593 "bdev_ftl_get_properties", 00:04:39.593 "bdev_ftl_get_stats", 00:04:39.593 "bdev_ftl_unmap", 00:04:39.593 "bdev_ftl_unload", 00:04:39.593 "bdev_ftl_delete", 00:04:39.593 "bdev_ftl_load", 00:04:39.593 "bdev_ftl_create", 00:04:39.593 "bdev_virtio_attach_controller", 00:04:39.593 "bdev_virtio_scsi_get_devices", 00:04:39.593 "bdev_virtio_detach_controller", 00:04:39.593 "bdev_virtio_blk_set_hotplug", 00:04:39.593 "bdev_iscsi_delete", 00:04:39.593 "bdev_iscsi_create", 00:04:39.593 "bdev_iscsi_set_options", 00:04:39.593 "accel_error_inject_error", 00:04:39.593 "ioat_scan_accel_module", 00:04:39.593 "dsa_scan_accel_module", 00:04:39.593 "iaa_scan_accel_module", 00:04:39.593 "vfu_virtio_create_fs_endpoint", 00:04:39.593 "vfu_virtio_create_scsi_endpoint", 00:04:39.593 "vfu_virtio_scsi_remove_target", 00:04:39.593 "vfu_virtio_scsi_add_target", 00:04:39.593 "vfu_virtio_create_blk_endpoint", 00:04:39.593 "vfu_virtio_delete_endpoint", 00:04:39.593 "keyring_file_remove_key", 00:04:39.593 "keyring_file_add_key", 00:04:39.593 "keyring_linux_set_options", 00:04:39.593 "fsdev_aio_delete", 00:04:39.593 "fsdev_aio_create", 00:04:39.593 "iscsi_get_histogram", 00:04:39.593 "iscsi_enable_histogram", 00:04:39.593 "iscsi_set_options", 00:04:39.593 "iscsi_get_auth_groups", 00:04:39.593 "iscsi_auth_group_remove_secret", 00:04:39.593 "iscsi_auth_group_add_secret", 00:04:39.593 "iscsi_delete_auth_group", 00:04:39.593 "iscsi_create_auth_group", 00:04:39.593 "iscsi_set_discovery_auth", 00:04:39.593 "iscsi_get_options", 00:04:39.593 "iscsi_target_node_request_logout", 00:04:39.593 "iscsi_target_node_set_redirect", 00:04:39.593 "iscsi_target_node_set_auth", 00:04:39.593 "iscsi_target_node_add_lun", 00:04:39.593 "iscsi_get_stats", 00:04:39.593 "iscsi_get_connections", 00:04:39.593 "iscsi_portal_group_set_auth", 00:04:39.593 "iscsi_start_portal_group", 00:04:39.593 "iscsi_delete_portal_group", 00:04:39.593 "iscsi_create_portal_group", 00:04:39.593 "iscsi_get_portal_groups", 00:04:39.593 "iscsi_delete_target_node", 00:04:39.593 "iscsi_target_node_remove_pg_ig_maps", 00:04:39.593 "iscsi_target_node_add_pg_ig_maps", 00:04:39.593 "iscsi_create_target_node", 00:04:39.593 "iscsi_get_target_nodes", 00:04:39.593 "iscsi_delete_initiator_group", 00:04:39.593 "iscsi_initiator_group_remove_initiators", 00:04:39.593 "iscsi_initiator_group_add_initiators", 00:04:39.593 "iscsi_create_initiator_group", 00:04:39.593 "iscsi_get_initiator_groups", 00:04:39.593 "nvmf_set_crdt", 00:04:39.593 "nvmf_set_config", 00:04:39.593 "nvmf_set_max_subsystems", 00:04:39.593 "nvmf_stop_mdns_prr", 00:04:39.593 "nvmf_publish_mdns_prr", 00:04:39.593 "nvmf_subsystem_get_listeners", 00:04:39.593 
"nvmf_subsystem_get_qpairs", 00:04:39.593 "nvmf_subsystem_get_controllers", 00:04:39.593 "nvmf_get_stats", 00:04:39.593 "nvmf_get_transports", 00:04:39.593 "nvmf_create_transport", 00:04:39.593 "nvmf_get_targets", 00:04:39.593 "nvmf_delete_target", 00:04:39.593 "nvmf_create_target", 00:04:39.593 "nvmf_subsystem_allow_any_host", 00:04:39.593 "nvmf_subsystem_set_keys", 00:04:39.593 "nvmf_subsystem_remove_host", 00:04:39.593 "nvmf_subsystem_add_host", 00:04:39.593 "nvmf_ns_remove_host", 00:04:39.593 "nvmf_ns_add_host", 00:04:39.593 "nvmf_subsystem_remove_ns", 00:04:39.593 "nvmf_subsystem_set_ns_ana_group", 00:04:39.593 "nvmf_subsystem_add_ns", 00:04:39.593 "nvmf_subsystem_listener_set_ana_state", 00:04:39.593 "nvmf_discovery_get_referrals", 00:04:39.593 "nvmf_discovery_remove_referral", 00:04:39.593 "nvmf_discovery_add_referral", 00:04:39.593 "nvmf_subsystem_remove_listener", 00:04:39.593 "nvmf_subsystem_add_listener", 00:04:39.593 "nvmf_delete_subsystem", 00:04:39.593 "nvmf_create_subsystem", 00:04:39.593 "nvmf_get_subsystems", 00:04:39.593 "env_dpdk_get_mem_stats", 00:04:39.593 "nbd_get_disks", 00:04:39.593 "nbd_stop_disk", 00:04:39.593 "nbd_start_disk", 00:04:39.593 "ublk_recover_disk", 00:04:39.593 "ublk_get_disks", 00:04:39.593 "ublk_stop_disk", 00:04:39.593 "ublk_start_disk", 00:04:39.593 "ublk_destroy_target", 00:04:39.593 "ublk_create_target", 00:04:39.593 "virtio_blk_create_transport", 00:04:39.593 "virtio_blk_get_transports", 00:04:39.593 "vhost_controller_set_coalescing", 00:04:39.593 "vhost_get_controllers", 00:04:39.593 "vhost_delete_controller", 00:04:39.593 "vhost_create_blk_controller", 00:04:39.593 "vhost_scsi_controller_remove_target", 00:04:39.593 "vhost_scsi_controller_add_target", 00:04:39.593 "vhost_start_scsi_controller", 00:04:39.593 "vhost_create_scsi_controller", 00:04:39.593 "thread_set_cpumask", 00:04:39.593 "scheduler_set_options", 00:04:39.593 "framework_get_governor", 00:04:39.593 "framework_get_scheduler", 00:04:39.593 "framework_set_scheduler", 00:04:39.593 "framework_get_reactors", 00:04:39.593 "thread_get_io_channels", 00:04:39.593 "thread_get_pollers", 00:04:39.593 "thread_get_stats", 00:04:39.593 "framework_monitor_context_switch", 00:04:39.593 "spdk_kill_instance", 00:04:39.593 "log_enable_timestamps", 00:04:39.593 "log_get_flags", 00:04:39.593 "log_clear_flag", 00:04:39.593 "log_set_flag", 00:04:39.593 "log_get_level", 00:04:39.593 "log_set_level", 00:04:39.593 "log_get_print_level", 00:04:39.593 "log_set_print_level", 00:04:39.593 "framework_enable_cpumask_locks", 00:04:39.593 "framework_disable_cpumask_locks", 00:04:39.593 "framework_wait_init", 00:04:39.593 "framework_start_init", 00:04:39.593 "scsi_get_devices", 00:04:39.593 "bdev_get_histogram", 00:04:39.593 "bdev_enable_histogram", 00:04:39.593 "bdev_set_qos_limit", 00:04:39.593 "bdev_set_qd_sampling_period", 00:04:39.593 "bdev_get_bdevs", 00:04:39.593 "bdev_reset_iostat", 00:04:39.593 "bdev_get_iostat", 00:04:39.593 "bdev_examine", 00:04:39.593 "bdev_wait_for_examine", 00:04:39.593 "bdev_set_options", 00:04:39.593 "accel_get_stats", 00:04:39.593 "accel_set_options", 00:04:39.593 "accel_set_driver", 00:04:39.593 "accel_crypto_key_destroy", 00:04:39.593 "accel_crypto_keys_get", 00:04:39.593 "accel_crypto_key_create", 00:04:39.593 "accel_assign_opc", 00:04:39.593 "accel_get_module_info", 00:04:39.593 "accel_get_opc_assignments", 00:04:39.593 "vmd_rescan", 00:04:39.593 "vmd_remove_device", 00:04:39.593 "vmd_enable", 00:04:39.593 "sock_get_default_impl", 00:04:39.593 "sock_set_default_impl", 
00:04:39.593 "sock_impl_set_options", 00:04:39.593 "sock_impl_get_options", 00:04:39.593 "iobuf_get_stats", 00:04:39.593 "iobuf_set_options", 00:04:39.593 "keyring_get_keys", 00:04:39.593 "vfu_tgt_set_base_path", 00:04:39.594 "framework_get_pci_devices", 00:04:39.594 "framework_get_config", 00:04:39.594 "framework_get_subsystems", 00:04:39.594 "fsdev_set_opts", 00:04:39.594 "fsdev_get_opts", 00:04:39.594 "trace_get_info", 00:04:39.594 "trace_get_tpoint_group_mask", 00:04:39.594 "trace_disable_tpoint_group", 00:04:39.594 "trace_enable_tpoint_group", 00:04:39.594 "trace_clear_tpoint_mask", 00:04:39.594 "trace_set_tpoint_mask", 00:04:39.594 "notify_get_notifications", 00:04:39.594 "notify_get_types", 00:04:39.594 "spdk_get_version", 00:04:39.594 "rpc_get_methods" 00:04:39.594 ] 00:04:39.594 17:41:27 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:39.594 17:41:27 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:39.594 17:41:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.594 17:41:27 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:39.594 17:41:27 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2774289 00:04:39.594 17:41:27 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2774289 ']' 00:04:39.594 17:41:27 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2774289 00:04:39.594 17:41:27 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:39.594 17:41:27 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.594 17:41:27 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2774289 00:04:39.594 17:41:27 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.594 17:41:27 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.594 17:41:27 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2774289' 00:04:39.594 killing process with pid 2774289 00:04:39.594 17:41:27 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2774289 00:04:39.594 17:41:27 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2774289 00:04:39.852 00:04:39.852 real 0m1.373s 00:04:39.852 user 0m2.578s 00:04:39.852 sys 0m0.378s 00:04:39.852 17:41:27 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.852 17:41:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.852 ************************************ 00:04:39.852 END TEST spdkcli_tcp 00:04:39.852 ************************************ 00:04:39.852 17:41:27 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:39.852 17:41:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.852 17:41:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.852 17:41:27 -- common/autotest_common.sh@10 -- # set +x 00:04:39.852 ************************************ 00:04:39.852 START TEST dpdk_mem_utility 00:04:39.852 ************************************ 00:04:39.852 17:41:27 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:39.852 * Looking for test storage... 
00:04:39.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:39.853 17:41:27 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:39.853 17:41:27 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:39.853 17:41:27 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:40.111 17:41:27 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.111 17:41:27 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:40.111 17:41:27 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.111 17:41:27 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:40.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.111 --rc genhtml_branch_coverage=1 00:04:40.111 --rc genhtml_function_coverage=1 00:04:40.111 --rc genhtml_legend=1 00:04:40.111 --rc geninfo_all_blocks=1 00:04:40.111 --rc geninfo_unexecuted_blocks=1 00:04:40.111 00:04:40.111 ' 00:04:40.111 17:41:27 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:40.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.111 --rc 
genhtml_branch_coverage=1 00:04:40.111 --rc genhtml_function_coverage=1 00:04:40.111 --rc genhtml_legend=1 00:04:40.111 --rc geninfo_all_blocks=1 00:04:40.111 --rc geninfo_unexecuted_blocks=1 00:04:40.111 00:04:40.111 ' 00:04:40.111 17:41:27 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:40.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.111 --rc genhtml_branch_coverage=1 00:04:40.111 --rc genhtml_function_coverage=1 00:04:40.111 --rc genhtml_legend=1 00:04:40.111 --rc geninfo_all_blocks=1 00:04:40.111 --rc geninfo_unexecuted_blocks=1 00:04:40.111 00:04:40.111 ' 00:04:40.111 17:41:27 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:40.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.111 --rc genhtml_branch_coverage=1 00:04:40.111 --rc genhtml_function_coverage=1 00:04:40.111 --rc genhtml_legend=1 00:04:40.111 --rc geninfo_all_blocks=1 00:04:40.111 --rc geninfo_unexecuted_blocks=1 00:04:40.111 00:04:40.111 ' 00:04:40.111 17:41:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:40.111 17:41:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2774708 00:04:40.111 17:41:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2774708 00:04:40.111 17:41:27 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2774708 ']' 00:04:40.111 17:41:27 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.111 17:41:27 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.111 17:41:27 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.111 17:41:27 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.111 17:41:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:40.111 17:41:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.111 [2024-12-06 17:41:27.786771] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:04:40.111 [2024-12-06 17:41:27.786836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2774708 ] 00:04:40.111 [2024-12-06 17:41:27.855873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.111 [2024-12-06 17:41:27.891192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.046 17:41:28 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.046 17:41:28 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:41.046 17:41:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:41.046 17:41:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:41.046 17:41:28 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.046 17:41:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:41.046 { 00:04:41.046 "filename": "/tmp/spdk_mem_dump.txt" 00:04:41.046 } 00:04:41.046 17:41:28 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.046 17:41:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:41.046 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:41.046 1 heaps totaling size 818.000000 MiB 00:04:41.046 size: 818.000000 MiB heap id: 0 00:04:41.046 end heaps---------- 00:04:41.046 9 mempools totaling size 603.782043 MiB 00:04:41.046 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:41.046 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:41.046 size: 100.555481 MiB name: bdev_io_2774708 00:04:41.046 size: 50.003479 MiB name: msgpool_2774708 00:04:41.046 size: 36.509338 MiB name: fsdev_io_2774708 00:04:41.047 size: 21.763794 MiB name: PDU_Pool 00:04:41.047 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:41.047 size: 4.133484 MiB name: evtpool_2774708 00:04:41.047 size: 0.026123 MiB name: Session_Pool 00:04:41.047 end mempools------- 00:04:41.047 6 memzones totaling size 4.142822 MiB 00:04:41.047 size: 1.000366 MiB name: RG_ring_0_2774708 00:04:41.047 size: 1.000366 MiB name: RG_ring_1_2774708 00:04:41.047 size: 1.000366 MiB name: RG_ring_4_2774708 00:04:41.047 size: 1.000366 MiB name: RG_ring_5_2774708 00:04:41.047 size: 0.125366 MiB name: RG_ring_2_2774708 00:04:41.047 size: 0.015991 MiB name: RG_ring_3_2774708 00:04:41.047 end memzones------- 00:04:41.047 17:41:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:41.047 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:41.047 list of free elements. 
size: 10.852478 MiB 00:04:41.047 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:41.047 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:41.047 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:41.047 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:41.047 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:41.047 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:41.047 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:41.047 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:41.047 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:41.047 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:41.047 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:41.047 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:41.047 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:41.047 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:41.047 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:41.047 list of standard malloc elements. size: 199.218628 MiB 00:04:41.047 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:41.047 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:41.047 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:41.047 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:41.047 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:41.047 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:41.047 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:41.047 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:41.047 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:41.047 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:41.047 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:41.047 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:41.047 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:41.047 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:41.047 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:41.047 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:41.047 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:41.047 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:41.047 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:41.047 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:41.047 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:41.047 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:41.047 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:41.047 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:41.047 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:41.047 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:41.047 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:41.047 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:41.047 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:41.047 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:41.047 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:41.047 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:41.047 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:41.047 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:41.047 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:41.047 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:41.047 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:41.047 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:41.047 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:41.047 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:41.047 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:41.047 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:41.047 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:41.047 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:41.047 list of memzone associated elements. size: 607.928894 MiB 00:04:41.047 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:41.047 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:41.047 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:41.047 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:41.047 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:41.047 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2774708_0 00:04:41.047 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:41.047 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2774708_0 00:04:41.047 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:41.047 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2774708_0 00:04:41.047 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:41.047 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:41.047 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:41.047 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:41.047 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:41.047 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2774708_0 00:04:41.047 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:41.047 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2774708 00:04:41.047 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:41.047 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2774708 00:04:41.047 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:41.047 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:41.047 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:41.047 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:41.047 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:41.047 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:41.047 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:41.047 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:41.047 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:41.047 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2774708 00:04:41.047 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:41.047 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2774708 00:04:41.047 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:41.047 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2774708 00:04:41.047 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:41.047 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2774708 00:04:41.047 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:41.047 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2774708 00:04:41.047 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:41.047 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2774708 00:04:41.047 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:41.047 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:41.047 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:41.047 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:41.047 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:41.047 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:41.047 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:41.047 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2774708 00:04:41.047 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:41.047 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2774708 00:04:41.047 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:41.047 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:41.047 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:41.047 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:41.047 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:41.047 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2774708 00:04:41.047 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:41.047 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:41.047 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:41.047 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2774708 00:04:41.047 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:41.047 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2774708 00:04:41.047 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:41.047 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2774708 00:04:41.047 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:41.047 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:41.047 17:41:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:41.047 17:41:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2774708 00:04:41.047 17:41:28 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2774708 ']' 00:04:41.047 17:41:28 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2774708 00:04:41.047 17:41:28 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:41.047 17:41:28 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.047 17:41:28 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2774708 00:04:41.047 17:41:28 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.047 17:41:28 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.047 17:41:28 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2774708' 00:04:41.047 killing process with pid 2774708 00:04:41.047 17:41:28 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2774708 00:04:41.047 17:41:28 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2774708 00:04:41.047 00:04:41.047 real 0m1.250s 00:04:41.047 user 0m1.331s 00:04:41.047 sys 0m0.333s 00:04:41.047 17:41:28 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.047 17:41:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:41.047 ************************************ 00:04:41.047 END TEST dpdk_mem_utility 00:04:41.047 ************************************ 00:04:41.306 17:41:28 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:41.306 17:41:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.306 17:41:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.306 17:41:28 -- common/autotest_common.sh@10 -- # set +x 00:04:41.306 ************************************ 00:04:41.306 START TEST event 00:04:41.306 ************************************ 00:04:41.306 17:41:28 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:41.306 * Looking for test storage... 00:04:41.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:41.306 17:41:28 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.306 17:41:28 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:41.306 17:41:28 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.306 17:41:29 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.306 17:41:29 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.306 17:41:29 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.306 17:41:29 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.306 17:41:29 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.306 17:41:29 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.306 17:41:29 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.306 17:41:29 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.306 17:41:29 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.306 17:41:29 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.306 17:41:29 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.306 17:41:29 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.306 17:41:29 event -- scripts/common.sh@344 -- # case "$op" in 00:04:41.306 17:41:29 event -- scripts/common.sh@345 -- # : 1 00:04:41.306 17:41:29 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.306 17:41:29 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.306 17:41:29 event -- scripts/common.sh@365 -- # decimal 1 00:04:41.306 17:41:29 event -- scripts/common.sh@353 -- # local d=1 00:04:41.306 17:41:29 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.306 17:41:29 event -- scripts/common.sh@355 -- # echo 1 00:04:41.306 17:41:29 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.306 17:41:29 event -- scripts/common.sh@366 -- # decimal 2 00:04:41.306 17:41:29 event -- scripts/common.sh@353 -- # local d=2 00:04:41.306 17:41:29 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.306 17:41:29 event -- scripts/common.sh@355 -- # echo 2 00:04:41.306 17:41:29 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.306 17:41:29 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.306 17:41:29 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.306 17:41:29 event -- scripts/common.sh@368 -- # return 0 00:04:41.306 17:41:29 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.306 17:41:29 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:41.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.306 --rc genhtml_branch_coverage=1 00:04:41.306 --rc genhtml_function_coverage=1 00:04:41.306 --rc genhtml_legend=1 00:04:41.306 --rc geninfo_all_blocks=1 00:04:41.306 --rc geninfo_unexecuted_blocks=1 00:04:41.306 00:04:41.306 ' 00:04:41.306 17:41:29 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:41.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.306 --rc genhtml_branch_coverage=1 00:04:41.306 --rc genhtml_function_coverage=1 00:04:41.306 --rc genhtml_legend=1 00:04:41.306 --rc geninfo_all_blocks=1 00:04:41.306 --rc geninfo_unexecuted_blocks=1 00:04:41.306 00:04:41.306 ' 00:04:41.306 17:41:29 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:41.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.306 --rc genhtml_branch_coverage=1 00:04:41.306 --rc genhtml_function_coverage=1 00:04:41.306 --rc genhtml_legend=1 00:04:41.306 --rc geninfo_all_blocks=1 00:04:41.306 --rc geninfo_unexecuted_blocks=1 00:04:41.306 00:04:41.306 ' 00:04:41.306 17:41:29 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:41.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.306 --rc genhtml_branch_coverage=1 00:04:41.306 --rc genhtml_function_coverage=1 00:04:41.306 --rc genhtml_legend=1 00:04:41.306 --rc geninfo_all_blocks=1 00:04:41.306 --rc geninfo_unexecuted_blocks=1 00:04:41.306 00:04:41.306 ' 00:04:41.306 17:41:29 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:41.306 17:41:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:41.307 17:41:29 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:41.307 17:41:29 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:41.307 17:41:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.307 17:41:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.307 ************************************ 00:04:41.307 START TEST event_perf 00:04:41.307 ************************************ 00:04:41.307 17:41:29 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:41.307 Running I/O for 1 seconds...[2024-12-06 17:41:29.070188] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:04:41.307 [2024-12-06 17:41:29.070245] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2775105 ] 00:04:41.567 [2024-12-06 17:41:29.141207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:41.567 [2024-12-06 17:41:29.182563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.567 [2024-12-06 17:41:29.182718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:41.567 [2024-12-06 17:41:29.183268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:41.567 [2024-12-06 17:41:29.183397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.503 Running I/O for 1 seconds... 00:04:42.503 lcore 0: 186357 00:04:42.503 lcore 1: 186360 00:04:42.503 lcore 2: 186356 00:04:42.503 lcore 3: 186355 00:04:42.503 done. 00:04:42.503 00:04:42.503 real 0m1.152s 00:04:42.503 user 0m4.077s 00:04:42.503 sys 0m0.074s 00:04:42.503 17:41:30 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.503 17:41:30 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:42.503 ************************************ 00:04:42.503 END TEST event_perf 00:04:42.503 ************************************ 00:04:42.503 17:41:30 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:42.503 17:41:30 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:42.503 17:41:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.503 17:41:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.503 ************************************ 00:04:42.503 START TEST event_reactor 00:04:42.503 ************************************ 00:04:42.503 17:41:30 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:42.503 [2024-12-06 17:41:30.265616] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:04:42.503 [2024-12-06 17:41:30.265661] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2775360 ] 00:04:42.503 [2024-12-06 17:41:30.329782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.762 [2024-12-06 17:41:30.360751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.698 test_start 00:04:43.698 oneshot 00:04:43.698 tick 100 00:04:43.698 tick 100 00:04:43.698 tick 250 00:04:43.698 tick 100 00:04:43.698 tick 100 00:04:43.698 tick 100 00:04:43.698 tick 250 00:04:43.698 tick 500 00:04:43.698 tick 100 00:04:43.698 tick 100 00:04:43.698 tick 250 00:04:43.698 tick 100 00:04:43.698 tick 100 00:04:43.698 test_end 00:04:43.698 00:04:43.698 real 0m1.130s 00:04:43.698 user 0m1.076s 00:04:43.698 sys 0m0.051s 00:04:43.698 17:41:31 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.698 17:41:31 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:43.698 ************************************ 00:04:43.698 END TEST event_reactor 00:04:43.698 ************************************ 00:04:43.698 17:41:31 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:43.698 17:41:31 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:43.698 17:41:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.698 17:41:31 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.698 ************************************ 00:04:43.698 START TEST event_reactor_perf 00:04:43.698 ************************************ 00:04:43.699 17:41:31 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:43.699 [2024-12-06 17:41:31.441475] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:04:43.699 [2024-12-06 17:41:31.441521] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2775505 ] 00:04:43.699 [2024-12-06 17:41:31.506519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.958 [2024-12-06 17:41:31.536657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.897 test_start 00:04:44.897 test_end 00:04:44.897 Performance: 537794 events per second 00:04:44.897 00:04:44.897 real 0m1.130s 00:04:44.897 user 0m1.072s 00:04:44.897 sys 0m0.055s 00:04:44.897 17:41:32 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.897 17:41:32 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.897 ************************************ 00:04:44.897 END TEST event_reactor_perf 00:04:44.897 ************************************ 00:04:44.897 17:41:32 event -- event/event.sh@49 -- # uname -s 00:04:44.897 17:41:32 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:44.897 17:41:32 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:44.897 17:41:32 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.897 17:41:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.897 17:41:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.897 ************************************ 00:04:44.897 START TEST event_scheduler 00:04:44.897 ************************************ 00:04:44.897 17:41:32 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:44.897 * Looking for test storage... 
00:04:44.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:44.897 17:41:32 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:44.897 17:41:32 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:44.897 17:41:32 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:44.897 17:41:32 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.897 17:41:32 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:44.898 17:41:32 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.898 17:41:32 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:44.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.898 --rc genhtml_branch_coverage=1 00:04:44.898 --rc genhtml_function_coverage=1 00:04:44.898 --rc genhtml_legend=1 00:04:44.898 --rc geninfo_all_blocks=1 00:04:44.898 --rc geninfo_unexecuted_blocks=1 00:04:44.898 00:04:44.898 ' 00:04:44.898 17:41:32 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:44.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.898 --rc genhtml_branch_coverage=1 00:04:44.898 --rc genhtml_function_coverage=1 00:04:44.898 --rc genhtml_legend=1 00:04:44.898 --rc geninfo_all_blocks=1 00:04:44.898 --rc geninfo_unexecuted_blocks=1 00:04:44.898 00:04:44.898 ' 00:04:44.898 17:41:32 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:44.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.898 --rc genhtml_branch_coverage=1 00:04:44.898 --rc genhtml_function_coverage=1 00:04:44.898 --rc genhtml_legend=1 00:04:44.898 --rc geninfo_all_blocks=1 00:04:44.898 --rc geninfo_unexecuted_blocks=1 00:04:44.898 00:04:44.898 ' 00:04:44.898 17:41:32 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:44.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.898 --rc genhtml_branch_coverage=1 00:04:44.898 --rc genhtml_function_coverage=1 00:04:44.898 --rc genhtml_legend=1 00:04:44.898 --rc geninfo_all_blocks=1 00:04:44.898 --rc geninfo_unexecuted_blocks=1 00:04:44.898 00:04:44.898 ' 00:04:44.898 17:41:32 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:44.898 17:41:32 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2775882 00:04:44.898 17:41:32 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.898 17:41:32 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2775882 00:04:44.898 17:41:32 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2775882 ']' 00:04:44.898 17:41:32 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:44.898 17:41:32 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.898 17:41:32 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:44.898 17:41:32 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.898 17:41:32 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.898 17:41:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.157 [2024-12-06 17:41:32.747071] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:04:45.157 [2024-12-06 17:41:32.747131] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2775882 ] 00:04:45.157 [2024-12-06 17:41:32.825381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:45.157 [2024-12-06 17:41:32.873015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.157 [2024-12-06 17:41:32.873118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.157 [2024-12-06 17:41:32.873290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.157 [2024-12-06 17:41:32.873290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:45.725 17:41:33 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.725 17:41:33 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:45.725 17:41:33 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:45.725 17:41:33 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.725 17:41:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.725 [2024-12-06 17:41:33.535462] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:45.725 [2024-12-06 17:41:33.535475] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:45.725 [2024-12-06 17:41:33.535482] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:45.725 [2024-12-06 17:41:33.535486] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:45.725 [2024-12-06 17:41:33.535490] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:45.725 17:41:33 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.725 17:41:33 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:45.725 17:41:33 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.725 17:41:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.983 [2024-12-06 17:41:33.592484] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
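For reference, the scheduler bring-up that rpc_cmd drives above can be reproduced by hand against the app's RPC socket. A minimal sketch, assuming the app was started with --wait-for-rpc as shown and that /var/tmp/spdk.sock is its socket (both visible in this log); note the dynamic scheduler may still fall back to its built-in load/core/busy settings when the DPDK governor cannot initialize, exactly as the notices above show:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # select the dynamic scheduler while the app is still parked in --wait-for-rpc
  $RPC -s /var/tmp/spdk.sock framework_set_scheduler dynamic
  # release the app from --wait-for-rpc so the reactors begin scheduling
  $RPC -s /var/tmp/spdk.sock framework_start_init
  # confirm which scheduler ended up active and with what options
  $RPC -s /var/tmp/spdk.sock framework_get_scheduler

All three RPC names appear in the rpc_get_methods listing captured earlier in this log.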
00:04:45.983 17:41:33 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.983 17:41:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:45.983 17:41:33 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.983 17:41:33 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.983 17:41:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.983 ************************************ 00:04:45.983 START TEST scheduler_create_thread 00:04:45.983 ************************************ 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.983 2 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.983 3 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.983 4 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.983 5 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.983 6 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.983 7 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.983 8 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.983 9 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.983 10 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.983 17:41:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:45.984 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.984 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.984 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.984 17:41:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:45.984 17:41:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:45.984 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.984 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.984 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.984 17:41:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:45.984 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.984 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.984 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.984 17:41:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:45.984 17:41:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:45.984 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.984 17:41:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.362 17:41:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.362 00:04:47.362 real 0m1.171s 00:04:47.362 user 0m0.013s 00:04:47.362 sys 0m0.003s 00:04:47.362 17:41:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.362 17:41:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.362 ************************************ 00:04:47.362 END TEST scheduler_create_thread 00:04:47.362 ************************************ 00:04:47.362 17:41:34 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:47.362 17:41:34 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2775882 00:04:47.362 17:41:34 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2775882 ']' 00:04:47.362 17:41:34 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2775882 00:04:47.362 17:41:34 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:47.362 17:41:34 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.362 17:41:34 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2775882 00:04:47.362 17:41:34 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:47.362 17:41:34 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:47.362 17:41:34 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2775882' 00:04:47.362 killing process with pid 2775882 00:04:47.362 17:41:34 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2775882 00:04:47.362 17:41:34 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2775882 00:04:47.622 [2024-12-06 17:41:35.273631] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
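The create/set_active/delete sequence above is issued through the test's scheduler plugin. A hand-run equivalent sketch, assuming the plugin module is importable by rpc.py (e.g. via PYTHONPATH, which this excerpt does not show) and that scheduler_thread_create prints the new thread id on stdout, as the captured thread_id=11 and thread_id=12 values above suggest:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # pinned thread on core 0 (mask 0x1) at 100% active load, mirroring the test's active_pinned threads
  TID=$($RPC -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
  # drop the thread to 50% active, then remove it, as the test does for threads 11 and 12
  $RPC -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_set_active "$TID" 50
  $RPC -s /var/tmp/spdk.sock --plugin scheduler_plugin scheduler_thread_delete "$TID"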
00:04:47.622 00:04:47.622 real 0m2.758s 00:04:47.622 user 0m4.950s 00:04:47.622 sys 0m0.300s 00:04:47.622 17:41:35 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.622 17:41:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.622 ************************************ 00:04:47.622 END TEST event_scheduler 00:04:47.622 ************************************ 00:04:47.622 17:41:35 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:47.622 17:41:35 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:47.622 17:41:35 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.622 17:41:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.622 17:41:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.622 ************************************ 00:04:47.622 START TEST app_repeat 00:04:47.622 ************************************ 00:04:47.622 17:41:35 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:47.622 17:41:35 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.622 17:41:35 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.622 17:41:35 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:47.622 17:41:35 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.622 17:41:35 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:47.622 17:41:35 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:47.622 17:41:35 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:47.622 17:41:35 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2776596 00:04:47.622 17:41:35 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.622 17:41:35 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2776596' 00:04:47.622 Process app_repeat pid: 2776596 00:04:47.622 17:41:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:47.622 17:41:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:47.622 spdk_app_start Round 0 00:04:47.622 17:41:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2776596 /var/tmp/spdk-nbd.sock 00:04:47.622 17:41:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2776596 ']' 00:04:47.622 17:41:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:47.622 17:41:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.622 17:41:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:47.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:47.622 17:41:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.622 17:41:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:47.622 17:41:35 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:47.622 [2024-12-06 17:41:35.430045] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
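app_repeat itself is started in the background with a two-core mask and a 4-second repeat interval, with a trap that kills it on any exit; the test then blocks until the app is serving RPC on /var/tmp/spdk-nbd.sock. A hedged sketch of that launch-and-wait pattern is below: the binary path and arguments are taken from the trace, but the polling loop is a simplified stand-in for the waitforlisten helper in common/autotest_common.sh, not a copy of it.

```bash
# Sketch of the app_repeat launch seen above.
rpc_server=/var/tmp/spdk-nbd.sock

./test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
repeat_pid=$!
# killprocess is the autotest teardown helper (sketched further below).
trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

# Wait up to ~10s for the app to come up and open its RPC socket.
for ((i = 0; i < 100; i++)); do
    [[ -S $rpc_server ]] && break
    sleep 0.1
done
[[ -S $rpc_server ]] || { echo "app_repeat never opened $rpc_server" >&2; exit 1; }
```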
00:04:47.622 [2024-12-06 17:41:35.430092] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2776596 ] 00:04:47.880 [2024-12-06 17:41:35.495089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.880 [2024-12-06 17:41:35.525327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.880 [2024-12-06 17:41:35.525413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.880 17:41:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.880 17:41:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:47.880 17:41:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.138 Malloc0 00:04:48.138 17:41:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.138 Malloc1 00:04:48.138 17:41:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.138 17:41:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.138 17:41:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.138 17:41:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:48.138 17:41:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.138 17:41:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:48.138 17:41:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.138 17:41:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.138 17:41:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.138 17:41:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:48.138 17:41:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.138 17:41:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:48.138 17:41:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:48.138 17:41:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:48.138 17:41:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.138 17:41:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:48.396 /dev/nbd0 00:04:48.396 17:41:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:48.396 17:41:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:48.396 17:41:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:48.396 17:41:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:48.396 17:41:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:48.396 17:41:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:48.396 17:41:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:48.396 17:41:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:48.397 17:41:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:48.397 17:41:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:48.397 17:41:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.397 1+0 records in 00:04:48.397 1+0 records out 00:04:48.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000131757 s, 31.1 MB/s 00:04:48.397 17:41:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.397 17:41:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:48.397 17:41:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.397 17:41:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:48.397 17:41:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:48.397 17:41:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.397 17:41:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.397 17:41:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:48.656 /dev/nbd1 00:04:48.656 17:41:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:48.656 17:41:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:48.656 17:41:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:48.656 17:41:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:48.656 17:41:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:48.656 17:41:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:48.656 17:41:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:48.656 17:41:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:48.656 17:41:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:48.656 17:41:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:48.656 17:41:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.656 1+0 records in 00:04:48.656 1+0 records out 00:04:48.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000146656 s, 27.9 MB/s 00:04:48.656 17:41:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.656 17:41:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:48.656 17:41:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.656 17:41:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:48.656 17:41:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:48.656 17:41:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.656 17:41:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.656 
17:41:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.656 17:41:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.656 17:41:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.915 17:41:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:48.915 { 00:04:48.915 "nbd_device": "/dev/nbd0", 00:04:48.915 "bdev_name": "Malloc0" 00:04:48.915 }, 00:04:48.915 { 00:04:48.915 "nbd_device": "/dev/nbd1", 00:04:48.915 "bdev_name": "Malloc1" 00:04:48.915 } 00:04:48.915 ]' 00:04:48.915 17:41:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:48.915 { 00:04:48.915 "nbd_device": "/dev/nbd0", 00:04:48.915 "bdev_name": "Malloc0" 00:04:48.915 }, 00:04:48.915 { 00:04:48.915 "nbd_device": "/dev/nbd1", 00:04:48.915 "bdev_name": "Malloc1" 00:04:48.915 } 00:04:48.915 ]' 00:04:48.915 17:41:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.915 17:41:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:48.915 /dev/nbd1' 00:04:48.915 17:41:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:48.915 /dev/nbd1' 00:04:48.915 17:41:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.915 17:41:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:48.915 17:41:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:48.915 17:41:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:48.915 17:41:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:48.915 17:41:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:48.915 17:41:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.915 17:41:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:48.916 256+0 records in 00:04:48.916 256+0 records out 00:04:48.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00443992 s, 236 MB/s 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:48.916 256+0 records in 00:04:48.916 256+0 records out 00:04:48.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115332 s, 90.9 MB/s 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:48.916 256+0 records in 00:04:48.916 256+0 records out 00:04:48.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187423 s, 55.9 MB/s 00:04:48.916 17:41:36 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.916 17:41:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.176 17:41:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.436 17:41:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:49.436 17:41:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.436 17:41:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:49.436 17:41:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:49.436 17:41:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.436 17:41:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:49.436 17:41:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:49.436 17:41:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:49.436 17:41:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:49.436 17:41:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:49.436 17:41:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:49.436 17:41:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:49.436 17:41:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:49.696 17:41:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:49.696 [2024-12-06 17:41:37.414902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.696 [2024-12-06 17:41:37.443943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.696 [2024-12-06 17:41:37.443944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.696 [2024-12-06 17:41:37.473217] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:49.696 [2024-12-06 17:41:37.473248] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:52.987 17:41:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:52.987 17:41:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:52.987 spdk_app_start Round 1 00:04:52.987 17:41:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2776596 /var/tmp/spdk-nbd.sock 00:04:52.987 17:41:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2776596 ']' 00:04:52.987 17:41:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.987 17:41:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.987 17:41:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:52.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
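Round 0 above is one complete pass of the data-path check that repeats in every round: create two 64 MB malloc bdevs with 4 KiB blocks, export them as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each with O_DIRECT, and cmp the devices against the source file. Condensed into a sketch (mktemp stands in for the test's nbdrandtest scratch file under test/event/):

```bash
# One app_repeat round, condensed from the trace. Assumes the SPDK app is
# serving RPC on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded.
rpc() { ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
tmp_file=$(mktemp)          # stand-in for test/event/nbdrandtest

rpc bdev_malloc_create 64 4096        # -> Malloc0 (64 MB, 4 KiB blocks)
rpc bdev_malloc_create 64 4096        # -> Malloc1
rpc nbd_start_disk Malloc0 /dev/nbd0
rpc nbd_start_disk Malloc1 /dev/nbd1

# Write 1 MiB of random data through each device, bypassing the page cache.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# Read back and compare byte-for-byte; cmp exits non-zero on any mismatch.
for dev in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"

rpc nbd_stop_disk /dev/nbd0
rpc nbd_stop_disk /dev/nbd1
rpc spdk_kill_instance SIGTERM        # ends the round; the loop relaunches the app
```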
00:04:52.987 17:41:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.987 17:41:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.987 17:41:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.987 17:41:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:52.987 17:41:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.987 Malloc0 00:04:52.987 17:41:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.987 Malloc1 00:04:52.987 17:41:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.987 17:41:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.987 17:41:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.987 17:41:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:52.987 17:41:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.987 17:41:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:52.987 17:41:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.987 17:41:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.987 17:41:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.987 17:41:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:52.987 17:41:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.987 17:41:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:52.987 17:41:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:52.987 17:41:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:52.987 17:41:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.987 17:41:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:53.246 /dev/nbd0 00:04:53.246 17:41:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:53.246 17:41:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:53.246 17:41:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:53.246 17:41:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:53.246 17:41:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:53.246 17:41:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:53.246 17:41:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:53.246 17:41:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:53.246 17:41:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:53.246 17:41:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:53.246 17:41:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:53.246 1+0 records in 00:04:53.246 1+0 records out 00:04:53.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214175 s, 19.1 MB/s 00:04:53.246 17:41:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.246 17:41:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:53.246 17:41:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.246 17:41:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:53.246 17:41:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:53.246 17:41:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.246 17:41:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.246 17:41:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:53.506 /dev/nbd1 00:04:53.506 17:41:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:53.506 17:41:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:53.506 17:41:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:53.506 17:41:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:53.506 17:41:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:53.506 17:41:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:53.506 17:41:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:53.506 17:41:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:53.506 17:41:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:53.506 17:41:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:53.506 17:41:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.506 1+0 records in 00:04:53.506 1+0 records out 00:04:53.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000119516 s, 34.3 MB/s 00:04:53.506 17:41:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.506 17:41:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:53.506 17:41:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.506 17:41:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:53.506 17:41:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:53.506 17:41:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.506 17:41:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.506 17:41:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.506 17:41:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.506 17:41:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:53.766 { 00:04:53.766 "nbd_device": "/dev/nbd0", 00:04:53.766 "bdev_name": "Malloc0" 00:04:53.766 }, 00:04:53.766 { 00:04:53.766 "nbd_device": "/dev/nbd1", 00:04:53.766 "bdev_name": "Malloc1" 00:04:53.766 } 00:04:53.766 ]' 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:53.766 { 00:04:53.766 "nbd_device": "/dev/nbd0", 00:04:53.766 "bdev_name": "Malloc0" 00:04:53.766 }, 00:04:53.766 { 00:04:53.766 "nbd_device": "/dev/nbd1", 00:04:53.766 "bdev_name": "Malloc1" 00:04:53.766 } 00:04:53.766 ]' 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:53.766 /dev/nbd1' 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:53.766 /dev/nbd1' 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:53.766 256+0 records in 00:04:53.766 256+0 records out 00:04:53.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043111 s, 243 MB/s 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:53.766 256+0 records in 00:04:53.766 256+0 records out 00:04:53.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123054 s, 85.2 MB/s 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:53.766 256+0 records in 00:04:53.766 256+0 records out 00:04:53.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171214 s, 61.2 MB/s 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.766 17:41:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.026 17:41:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.284 17:41:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:54.284 17:41:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.284 17:41:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:54.284 17:41:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:54.284 17:41:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.284 17:41:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:54.284 17:41:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:54.284 17:41:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:54.284 17:41:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:54.284 17:41:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:54.284 17:41:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:54.284 17:41:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:54.284 17:41:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:54.541 17:41:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:54.541 [2024-12-06 17:41:42.260554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.541 [2024-12-06 17:41:42.289751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.541 [2024-12-06 17:41:42.289755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.541 [2024-12-06 17:41:42.319456] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.541 [2024-12-06 17:41:42.319489] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:57.827 17:41:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:57.827 17:41:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:57.827 spdk_app_start Round 2 00:04:57.827 17:41:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2776596 /var/tmp/spdk-nbd.sock 00:04:57.827 17:41:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2776596 ']' 00:04:57.827 17:41:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.827 17:41:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.827 17:41:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:57.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
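Between each nbd_start_disk and the first dd above sits the waitfornbd helper: it polls /proc/partitions until the kernel lists the device, then proves a single-block O_DIRECT read returns real data. The following is a reconstruction from the xtrace lines, so treat it as a sketch: the retry count and the nbdtest scratch file match the trace, but the real helper in autotest_common.sh may differ in details.

```bash
# Reconstruction of the waitfornbd helper from the xtrace above: wait for the
# device node to register, then prove a 4 KiB direct read yields real data.
waitfornbd() {
    local nbd_name=$1 i size
    local scratch=./nbdtest   # assumption: the trace uses test/event/nbdtest

    # Phase 1: wait up to 20 tries for the kernel to list the device.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.5
    done
    ((i <= 20)) || return 1

    # Phase 2: retry a single-block O_DIRECT read until it yields 4096 bytes.
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct; then
            size=$(stat -c %s "$scratch")
            rm -f "$scratch"
            [[ $size != 0 ]] && return 0
        fi
        sleep 0.5
    done
    return 1
}

waitfornbd nbd0
```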
00:04:57.827 17:41:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.827 17:41:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.827 17:41:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.827 17:41:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:57.827 17:41:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.827 Malloc0 00:04:57.827 17:41:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.827 Malloc1 00:04:58.086 17:41:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:58.086 /dev/nbd0 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:58.086 17:41:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:58.086 17:41:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:58.086 17:41:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:58.086 17:41:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:58.086 17:41:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:58.086 17:41:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:58.086 17:41:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:58.086 17:41:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:58.086 17:41:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:58.086 1+0 records in 00:04:58.086 1+0 records out 00:04:58.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000119 s, 34.4 MB/s 00:04:58.086 17:41:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.086 17:41:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:58.086 17:41:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.086 17:41:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:58.086 17:41:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.086 17:41:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:58.345 /dev/nbd1 00:04:58.345 17:41:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:58.345 17:41:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:58.345 17:41:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:58.345 17:41:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:58.345 17:41:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:58.345 17:41:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:58.345 17:41:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:58.345 17:41:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:58.345 17:41:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:58.345 17:41:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:58.345 17:41:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.345 1+0 records in 00:04:58.345 1+0 records out 00:04:58.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179794 s, 22.8 MB/s 00:04:58.345 17:41:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.345 17:41:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:58.345 17:41:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.345 17:41:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:58.345 17:41:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:58.345 17:41:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.345 17:41:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.345 17:41:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.345 17:41:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.345 17:41:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:58.604 { 00:04:58.604 "nbd_device": "/dev/nbd0", 00:04:58.604 "bdev_name": "Malloc0" 00:04:58.604 }, 00:04:58.604 { 00:04:58.604 "nbd_device": "/dev/nbd1", 00:04:58.604 "bdev_name": "Malloc1" 00:04:58.604 } 00:04:58.604 ]' 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:58.604 { 00:04:58.604 "nbd_device": "/dev/nbd0", 00:04:58.604 "bdev_name": "Malloc0" 00:04:58.604 }, 00:04:58.604 { 00:04:58.604 "nbd_device": "/dev/nbd1", 00:04:58.604 "bdev_name": "Malloc1" 00:04:58.604 } 00:04:58.604 ]' 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:58.604 /dev/nbd1' 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:58.604 /dev/nbd1' 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:58.604 256+0 records in 00:04:58.604 256+0 records out 00:04:58.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474333 s, 221 MB/s 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:58.604 256+0 records in 00:04:58.604 256+0 records out 00:04:58.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119194 s, 88.0 MB/s 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:58.604 256+0 records in 00:04:58.604 256+0 records out 00:04:58.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136007 s, 77.1 MB/s 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.604 17:41:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.863 17:41:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.121 17:41:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:59.121 17:41:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:59.121 17:41:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.121 17:41:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:59.121 17:41:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:59.121 17:41:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.121 17:41:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:59.121 17:41:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:59.121 17:41:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:59.121 17:41:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:59.121 17:41:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:59.121 17:41:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:59.121 17:41:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:59.380 17:41:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:59.380 [2024-12-06 17:41:47.115317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.380 [2024-12-06 17:41:47.144483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.380 [2024-12-06 17:41:47.144483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.380 [2024-12-06 17:41:47.174141] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.380 [2024-12-06 17:41:47.174172] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:02.690 17:41:50 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2776596 /var/tmp/spdk-nbd.sock 00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2776596 ']' 00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:02.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
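The nbd_get_count checks that bracket every round parse the nbd_get_disks JSON with jq and count device paths with grep -c: two while the disks are attached, zero after they are stopped. The bare `true` in the trace keeps grep's non-zero exit status from tripping errexit when the list is empty. As a sketch, with the function wrapper being an illustration rather than the helper's exact shape:

```bash
# Sketch of the nbd_get_count check from the trace: ask the app which nbd
# devices it is serving, extract the device paths, and count them.
nbd_get_count() {
    local rpc_server=$1 disks_json names
    disks_json=$(./scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    # grep -c still prints 0 when nothing matches; || true absorbs its
    # non-zero exit so the count survives under set -e.
    echo "$names" | grep -c /dev/nbd || true
}

count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
[[ $count -eq 0 ]] || echo "expected no nbd devices, found $count" >&2
```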
00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:02.690 17:41:50 event.app_repeat -- event/event.sh@39 -- # killprocess 2776596 00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2776596 ']' 00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2776596 00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2776596 00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2776596' 00:05:02.690 killing process with pid 2776596 00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2776596 00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2776596 00:05:02.690 spdk_app_start is called in Round 0. 00:05:02.690 Shutdown signal received, stop current app iteration 00:05:02.690 Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 reinitialization... 00:05:02.690 spdk_app_start is called in Round 1. 00:05:02.690 Shutdown signal received, stop current app iteration 00:05:02.690 Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 reinitialization... 00:05:02.690 spdk_app_start is called in Round 2. 00:05:02.690 Shutdown signal received, stop current app iteration 00:05:02.690 Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 reinitialization... 00:05:02.690 spdk_app_start is called in Round 3. 
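Final teardown goes through the killprocess helper whose xtrace appears above: verify the pid is alive with kill -0, read the command name from ps so a bare sudo wrapper is never signalled directly, then kill and wait to reap the child. A simplified reconstruction follows; the sudo branch here is a placeholder for the real helper's more careful handling.

```bash
# Simplified reconstruction of the killprocess helper traced above.
killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone; nothing to do

    process_name=$(ps --no-headers -o comm= "$pid")
    if [[ $process_name == sudo ]]; then
        # Signal the real process under the sudo wrapper instead (simplified;
        # assumption: the target has exactly one child).
        pid=$(pgrep -P "$pid")
    fi

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap it if it is our child
}

killprocess "$repeat_pid"
```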
00:05:02.690 Shutdown signal received, stop current app iteration 00:05:02.690 17:41:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:02.690 17:41:50 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:02.690 00:05:02.690 real 0m14.901s 00:05:02.690 user 0m32.461s 00:05:02.690 sys 0m1.812s 00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.690 17:41:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.690 ************************************ 00:05:02.690 END TEST app_repeat 00:05:02.690 ************************************ 00:05:02.690 17:41:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:02.690 17:41:50 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:02.690 17:41:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.690 17:41:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.690 17:41:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.690 ************************************ 00:05:02.690 START TEST cpu_locks 00:05:02.690 ************************************ 00:05:02.690 17:41:50 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:02.690 * Looking for test storage... 00:05:02.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:02.690 17:41:50 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:02.690 17:41:50 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:02.690 17:41:50 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.690 17:41:50 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.690 17:41:50 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:02.690 17:41:50 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.690 17:41:50 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:02.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.690 --rc genhtml_branch_coverage=1 00:05:02.690 --rc genhtml_function_coverage=1 00:05:02.690 --rc genhtml_legend=1 00:05:02.690 --rc geninfo_all_blocks=1 00:05:02.690 --rc geninfo_unexecuted_blocks=1 00:05:02.690 00:05:02.690 ' 00:05:02.690 17:41:50 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.690 --rc genhtml_branch_coverage=1 00:05:02.690 --rc genhtml_function_coverage=1 00:05:02.690 --rc genhtml_legend=1 00:05:02.690 --rc geninfo_all_blocks=1 00:05:02.690 --rc geninfo_unexecuted_blocks=1 00:05:02.690 00:05:02.690 ' 00:05:02.690 17:41:50 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:02.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.690 --rc genhtml_branch_coverage=1 00:05:02.690 --rc genhtml_function_coverage=1 00:05:02.690 --rc genhtml_legend=1 00:05:02.690 --rc geninfo_all_blocks=1 00:05:02.690 --rc geninfo_unexecuted_blocks=1 00:05:02.690 00:05:02.690 ' 00:05:02.691 17:41:50 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.691 --rc genhtml_branch_coverage=1 00:05:02.691 --rc genhtml_function_coverage=1 00:05:02.691 --rc genhtml_legend=1 00:05:02.691 --rc geninfo_all_blocks=1 00:05:02.691 --rc geninfo_unexecuted_blocks=1 00:05:02.691 00:05:02.691 ' 00:05:02.691 17:41:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:02.691 17:41:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:02.691 17:41:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:02.691 17:41:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:02.691 17:41:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.691 17:41:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.691 17:41:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.691 ************************************ 
00:05:02.691 START TEST default_locks 00:05:02.691 ************************************ 00:05:02.691 17:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:02.691 17:41:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2780172 00:05:02.691 17:41:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2780172 00:05:02.691 17:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2780172 ']' 00:05:02.691 17:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.691 17:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.691 17:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.691 17:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.691 17:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.691 17:41:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.951 [2024-12-06 17:41:50.537115] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:05:02.951 [2024-12-06 17:41:50.537166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780172 ] 00:05:02.951 [2024-12-06 17:41:50.596069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.951 [2024-12-06 17:41:50.627469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.210 17:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.210 17:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:03.210 17:41:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2780172 00:05:03.210 17:41:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2780172 00:05:03.210 17:41:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.210 lslocks: write error 00:05:03.210 17:41:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2780172 00:05:03.210 17:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2780172 ']' 00:05:03.210 17:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2780172 00:05:03.210 17:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:03.210 17:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.210 17:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2780172 00:05:03.210 17:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.210 17:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.210 17:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2780172' 00:05:03.210 killing process with pid 2780172 00:05:03.210 17:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2780172 00:05:03.210 17:41:50 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2780172 00:05:03.470 17:41:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2780172 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2780172 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2780172 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2780172 ']' 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2780172) - No such process 00:05:03.471 ERROR: process (pid: 2780172) is no longer running 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:03.471 00:05:03.471 real 0m0.665s 00:05:03.471 user 0m0.650s 00:05:03.471 sys 0m0.335s 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.471 17:41:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.471 ************************************ 00:05:03.471 END TEST default_locks 00:05:03.471 ************************************ 00:05:03.471 17:41:51 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:03.471 17:41:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.471 17:41:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.471 17:41:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.471 ************************************ 00:05:03.471 START TEST default_locks_via_rpc 00:05:03.471 ************************************ 00:05:03.471 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:03.471 17:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2780355 00:05:03.471 17:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2780355 00:05:03.471 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2780355 ']' 00:05:03.471 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.471 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.471 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
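For the default_locks run that just finished: spdk_tgt launched with -m 0x1 claims core 0 by locking /var/tmp/spdk_cpu_lock_000, and the test verifies the claim by listing the process's file locks (the stray 'lslocks: write error' above is harmless stderr noise that the grep ignores). A sketch of the two cpu_locks.sh helpers as traced, with the empty-glob check approximated under an assumed nullglob:

    locks_exist() {  # does pid $1 hold any spdk_cpu_lock_* file lock?
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    no_locks() {     # after the kill, no lock files should remain in /var/tmp
        local lock_files=(/var/tmp/spdk_cpu_lock_*)  # assumes shopt -s nullglob
        (( ${#lock_files[@]} == 0 ))
    }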
00:05:03.471 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.471 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.471 17:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.471 [2024-12-06 17:41:51.256975] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:05:03.471 [2024-12-06 17:41:51.257027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780355 ] 00:05:03.730 [2024-12-06 17:41:51.324187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.730 [2024-12-06 17:41:51.356025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.730 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.730 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:03.730 17:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:03.730 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.730 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.730 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.730 17:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:03.730 17:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:03.730 17:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:03.730 17:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:03.730 17:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:03.730 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.730 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.730 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.730 17:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2780355 00:05:03.730 17:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2780355 00:05:03.730 17:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.990 17:41:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2780355 00:05:03.990 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2780355 ']' 00:05:03.990 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2780355 00:05:03.990 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:03.990 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.990 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2780355 00:05:03.990 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.990 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.990 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2780355' 00:05:03.990 killing process with pid 2780355 00:05:03.990 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2780355 00:05:03.990 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2780355 00:05:04.249 00:05:04.249 real 0m0.756s 00:05:04.249 user 0m0.735s 00:05:04.249 sys 0m0.346s 00:05:04.249 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.249 17:41:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.249 ************************************ 00:05:04.249 END TEST default_locks_via_rpc 00:05:04.249 ************************************ 00:05:04.249 17:41:51 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:04.249 17:41:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.249 17:41:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.249 17:41:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.249 ************************************ 00:05:04.249 START TEST non_locking_app_on_locked_coremask 00:05:04.249 ************************************ 00:05:04.249 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:04.249 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2780564 00:05:04.249 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2780564 /var/tmp/spdk.sock 00:05:04.249 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2780564 ']' 00:05:04.249 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.249 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.249 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.249 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:04.249 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.249 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.249 [2024-12-06 17:41:52.057409] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
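default_locks_via_rpc, completed just above, drives the same core locks through runtime RPCs instead of process startup: framework_disable_cpumask_locks releases the per-core lock files, framework_enable_cpumask_locks re-acquires them, and locks_exist is re-checked afterwards. A hedged sketch using the workspace's rpc.py; the $tgt_pid variable is illustrative, not from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" framework_disable_cpumask_locks   # drop the per-core lock files
    "$rpc" framework_enable_cpumask_locks    # re-take them on the live target
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core 0 re-claimed"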
00:05:04.249 [2024-12-06 17:41:52.057458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780564 ] 00:05:04.508 [2024-12-06 17:41:52.122522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.508 [2024-12-06 17:41:52.152096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.508 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.508 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:04.508 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2780570 00:05:04.508 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2780570 /var/tmp/spdk2.sock 00:05:04.508 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2780570 ']' 00:05:04.508 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:04.508 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.508 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:04.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:04.508 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.508 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.508 17:41:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:04.768 [2024-12-06 17:41:52.355572] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:05:04.768 [2024-12-06 17:41:52.355624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780570 ] 00:05:04.768 [2024-12-06 17:41:52.451923] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
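The non_locking_app_on_locked_coremask startup above runs two targets on the same core mask; the 'CPU core locks deactivated' notice belongs to the second one, which skips the claim entirely, so no conflict arises. A sketch of the two launches, assuming spdk_tgt stands for the build/bin binary from this workspace:

    spdk_tgt -m 0x1 &                    # first target: claims spdk_cpu_lock_000
    pid1=$!
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                              # second target: same mask, no claim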
00:05:04.768 [2024-12-06 17:41:52.451950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.768 [2024-12-06 17:41:52.514126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.336 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.336 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:05.336 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2780564 00:05:05.336 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2780564 00:05:05.336 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.595 lslocks: write error 00:05:05.595 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2780564 00:05:05.595 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2780564 ']' 00:05:05.595 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2780564 00:05:05.595 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:05.855 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.855 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2780564 00:05:05.855 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.855 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.855 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2780564' 00:05:05.855 killing process with pid 2780564 00:05:05.855 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2780564 00:05:05.855 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2780564 00:05:06.116 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2780570 00:05:06.116 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2780570 ']' 00:05:06.116 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2780570 00:05:06.116 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:06.116 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.116 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2780570 00:05:06.116 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.116 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.116 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2780570' 00:05:06.116 
killing process with pid 2780570 00:05:06.116 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2780570 00:05:06.116 17:41:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2780570 00:05:06.376 00:05:06.376 real 0m2.051s 00:05:06.376 user 0m2.198s 00:05:06.376 sys 0m0.700s 00:05:06.376 17:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.376 17:41:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.376 ************************************ 00:05:06.376 END TEST non_locking_app_on_locked_coremask 00:05:06.376 ************************************ 00:05:06.376 17:41:54 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:06.376 17:41:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.376 17:41:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.376 17:41:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.376 ************************************ 00:05:06.376 START TEST locking_app_on_unlocked_coremask 00:05:06.376 ************************************ 00:05:06.376 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:06.376 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2781017 00:05:06.376 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2781017 /var/tmp/spdk.sock 00:05:06.376 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2781017 ']' 00:05:06.376 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.376 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.376 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.376 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.376 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.376 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:06.376 [2024-12-06 17:41:54.159652] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:05:06.376 [2024-12-06 17:41:54.159708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2781017 ] 00:05:06.697 [2024-12-06 17:41:54.225609] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:06.697 [2024-12-06 17:41:54.225636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.697 [2024-12-06 17:41:54.259337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.697 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.697 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:06.697 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2781234 00:05:06.697 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2781234 /var/tmp/spdk2.sock 00:05:06.697 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2781234 ']' 00:05:06.697 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:06.697 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.697 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:06.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:06.697 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.697 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.697 17:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:06.697 [2024-12-06 17:41:54.472465] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:05:06.697 [2024-12-06 17:41:54.472516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2781234 ] 00:05:06.997 [2024-12-06 17:41:54.567019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.997 [2024-12-06 17:41:54.629161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.678 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.678 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:07.678 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2781234 00:05:07.678 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.678 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2781234 00:05:07.937 lslocks: write error 00:05:07.937 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2781017 00:05:07.937 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2781017 ']' 00:05:07.937 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2781017 00:05:07.937 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:07.937 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.937 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2781017 00:05:07.937 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.937 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.937 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2781017' 00:05:07.937 killing process with pid 2781017 00:05:07.937 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2781017 00:05:07.937 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2781017 00:05:08.197 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2781234 00:05:08.197 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2781234 ']' 00:05:08.197 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2781234 00:05:08.197 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:08.197 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.197 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2781234 00:05:08.197 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.197 17:41:55 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.197 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2781234' 00:05:08.197 killing process with pid 2781234 00:05:08.197 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2781234 00:05:08.197 17:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2781234 00:05:08.456 00:05:08.456 real 0m2.055s 00:05:08.456 user 0m2.194s 00:05:08.456 sys 0m0.702s 00:05:08.456 17:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.456 17:41:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.456 ************************************ 00:05:08.456 END TEST locking_app_on_unlocked_coremask 00:05:08.456 ************************************ 00:05:08.456 17:41:56 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:08.456 17:41:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.456 17:41:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.456 17:41:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.456 ************************************ 00:05:08.456 START TEST locking_app_on_locked_coremask 00:05:08.456 ************************************ 00:05:08.456 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:08.456 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2781650 00:05:08.456 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2781650 /var/tmp/spdk.sock 00:05:08.456 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2781650 ']' 00:05:08.456 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.456 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.457 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.457 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.457 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.457 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.457 [2024-12-06 17:41:56.260691] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
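locking_app_on_locked_coremask, starting above, inverts the earlier tests: the second spdk_tgt keeps cpumask locks enabled on a mask that pid 2781650 already holds, so its startup must fail ('Cannot create lock on core 0...' in the ERROR lines below), and the NOT wrapper converts that expected failure into a pass. A simplified sketch of the wrapper's effect; the real autotest_common.sh version also inspects the exit-status range, as the es checks in the trace show:

    NOT() {   # succeed only when the wrapped command fails
        if "$@"; then return 1; fi
        return 0
    }
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock   # second claim must fail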
00:05:08.457 [2024-12-06 17:41:56.260740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2781650 ] 00:05:08.716 [2024-12-06 17:41:56.325781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.716 [2024-12-06 17:41:56.354464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.716 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.716 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:08.716 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2781656 00:05:08.716 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2781656 /var/tmp/spdk2.sock 00:05:08.716 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:08.716 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2781656 /var/tmp/spdk2.sock 00:05:08.716 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:08.717 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:08.717 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.717 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:08.717 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.717 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2781656 /var/tmp/spdk2.sock 00:05:08.717 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2781656 ']' 00:05:08.717 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:08.717 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.717 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:08.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:08.717 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.717 17:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.976 [2024-12-06 17:41:56.560940] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:05:08.977 [2024-12-06 17:41:56.560990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2781656 ] 00:05:08.977 [2024-12-06 17:41:56.658948] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2781650 has claimed it. 00:05:08.977 [2024-12-06 17:41:56.658986] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:09.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2781656) - No such process 00:05:09.545 ERROR: process (pid: 2781656) is no longer running 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2781650 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2781650 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.545 lslocks: write error 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2781650 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2781650 ']' 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2781650 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2781650 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2781650' 00:05:09.545 killing process with pid 2781650 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2781650 00:05:09.545 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2781650 00:05:09.805 00:05:09.805 real 0m1.329s 00:05:09.805 user 0m1.438s 00:05:09.805 sys 0m0.428s 00:05:09.805 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:09.805 17:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.805 ************************************ 00:05:09.805 END TEST locking_app_on_locked_coremask 00:05:09.805 ************************************ 00:05:09.805 17:41:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:09.805 17:41:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.805 17:41:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.805 17:41:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.805 ************************************ 00:05:09.805 START TEST locking_overlapped_coremask 00:05:09.805 ************************************ 00:05:09.805 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:09.805 17:41:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2782014 00:05:09.805 17:41:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2782014 /var/tmp/spdk.sock 00:05:09.805 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2782014 ']' 00:05:09.805 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.805 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.805 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.805 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.805 17:41:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:09.805 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.064 [2024-12-06 17:41:57.635951] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:05:10.064 [2024-12-06 17:41:57.636000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2782014 ] 00:05:10.064 [2024-12-06 17:41:57.702243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:10.064 [2024-12-06 17:41:57.731991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.064 [2024-12-06 17:41:57.732146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.064 [2024-12-06 17:41:57.732318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.324 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.324 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:10.324 17:41:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2782019 00:05:10.324 17:41:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2782019 /var/tmp/spdk2.sock 00:05:10.324 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:10.324 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2782019 /var/tmp/spdk2.sock 00:05:10.325 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:10.325 17:41:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:10.325 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.325 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:10.325 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.325 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2782019 /var/tmp/spdk2.sock 00:05:10.325 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2782019 ']' 00:05:10.325 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.325 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.325 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.325 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.325 17:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.325 [2024-12-06 17:41:57.934405] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:05:10.325 [2024-12-06 17:41:57.934452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2782019 ] 00:05:10.325 [2024-12-06 17:41:58.057522] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2782014 has claimed it. 00:05:10.325 [2024-12-06 17:41:58.057564] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:10.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2782019) - No such process 00:05:10.901 ERROR: process (pid: 2782019) is no longer running 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2782014 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2782014 ']' 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2782014 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2782014 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2782014' 00:05:10.901 killing process with pid 2782014 00:05:10.901 17:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2782014 00:05:10.901 17:41:58 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2782014 00:05:11.161 00:05:11.161 real 0m1.194s 00:05:11.161 user 0m3.323s 00:05:11.161 sys 0m0.321s 00:05:11.161 17:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.161 17:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.161 ************************************ 00:05:11.161 END TEST locking_overlapped_coremask 00:05:11.161 ************************************ 00:05:11.161 17:41:58 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:11.161 17:41:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.161 17:41:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.161 17:41:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.161 ************************************ 00:05:11.161 START TEST locking_overlapped_coremask_via_rpc 00:05:11.161 ************************************ 00:05:11.161 17:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:11.161 17:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2782281 00:05:11.161 17:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2782281 /var/tmp/spdk.sock 00:05:11.161 17:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2782281 ']' 00:05:11.161 17:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.161 17:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.161 17:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.161 17:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.161 17:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.161 17:41:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:11.161 [2024-12-06 17:41:58.881581] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:05:11.161 [2024-12-06 17:41:58.881630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2782281 ] 00:05:11.161 [2024-12-06 17:41:58.948158] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
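The test that just ended turns on SPDK's per-core lock files: a target started with core mask 0x7 creates /var/tmp/spdk_cpu_lock_000 through _002, a second target overlapping core 2 is refused, and check_remaining_locks asserts the surviving glob matches exactly that set. A minimal sketch of the same assertion, assuming a target holding cores 0-2 is already running:

# Sketch: verify that a target with core mask 0x7 holds exactly
# /var/tmp/spdk_cpu_lock_000..002, mirroring check_remaining_locks above.
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
    echo "lock files match core mask 0x7"
else
    echo "unexpected lock files: ${locks[*]}" >&2
fi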
00:05:11.161 [2024-12-06 17:41:58.948188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.161 [2024-12-06 17:41:58.981693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.161 [2024-12-06 17:41:58.981843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.161 [2024-12-06 17:41:58.981845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.420 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.420 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:11.420 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2782390 00:05:11.420 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2782390 /var/tmp/spdk2.sock 00:05:11.420 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2782390 ']' 00:05:11.420 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.420 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.420 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.420 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.420 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.420 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:11.420 [2024-12-06 17:41:59.191192] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:05:11.420 [2024-12-06 17:41:59.191241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2782390 ] 00:05:11.679 [2024-12-06 17:41:59.288926] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:11.679 [2024-12-06 17:41:59.288950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.679 [2024-12-06 17:41:59.347852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:11.679 [2024-12-06 17:41:59.351186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:11.679 [2024-12-06 17:41:59.351187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.247 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.247 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.247 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:12.247 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.247 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.247 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.247 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.247 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:12.247 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.247 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:12.247 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.247 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:12.247 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.247 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.247 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.247 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.247 [2024-12-06 17:41:59.988167] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2782281 has claimed it. 
00:05:12.247 request: 00:05:12.248 { 00:05:12.248 "method": "framework_enable_cpumask_locks", 00:05:12.248 "req_id": 1 00:05:12.248 } 00:05:12.248 Got JSON-RPC error response 00:05:12.248 response: 00:05:12.248 { 00:05:12.248 "code": -32603, 00:05:12.248 "message": "Failed to claim CPU core: 2" 00:05:12.248 } 00:05:12.248 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:12.248 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:12.248 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:12.248 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:12.248 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:12.248 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2782281 /var/tmp/spdk.sock 00:05:12.248 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2782281 ']' 00:05:12.248 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.248 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.248 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.248 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.248 17:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.507 17:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.507 17:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.507 17:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2782390 /var/tmp/spdk2.sock 00:05:12.507 17:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2782390 ']' 00:05:12.508 17:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.508 17:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.508 17:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
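The exchange above is the RPC-driven variant of the same conflict: both targets start with --disable-cpumask-locks, the first enables its locks over /var/tmp/spdk.sock, and the second's framework_enable_cpumask_locks call fails with JSON-RPC error -32603 because core 2 is already claimed. A sketch of issuing that call by hand with the workspace's rpc.py, under the same sockets and pids as logged:

# Sketch: ask the second target (socket /var/tmp/spdk2.sock) to claim its
# cores; with pid 2782281 holding core 2 this fails with -32603
# ("Failed to claim CPU core: 2"), exactly as shown in the response above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
if ! "$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
    echo "core already claimed by another SPDK process" >&2
fi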
00:05:12.508 17:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.508 17:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.508 17:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.508 17:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.508 17:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:12.508 17:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:12.508 17:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:12.508 17:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:12.508 00:05:12.508 real 0m1.491s 00:05:12.508 user 0m0.672s 00:05:12.508 sys 0m0.101s 00:05:12.508 17:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.508 17:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.508 ************************************ 00:05:12.508 END TEST locking_overlapped_coremask_via_rpc 00:05:12.508 ************************************ 00:05:12.767 17:42:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:12.767 17:42:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2782281 ]] 00:05:12.767 17:42:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2782281 00:05:12.767 17:42:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2782281 ']' 00:05:12.767 17:42:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2782281 00:05:12.767 17:42:00 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:12.767 17:42:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.767 17:42:00 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2782281 00:05:12.767 17:42:00 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.767 17:42:00 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.767 17:42:00 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2782281' 00:05:12.767 killing process with pid 2782281 00:05:12.767 17:42:00 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2782281 00:05:12.767 17:42:00 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2782281 00:05:13.026 17:42:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2782390 ]] 00:05:13.026 17:42:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2782390 00:05:13.026 17:42:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2782390 ']' 00:05:13.026 17:42:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2782390 00:05:13.026 17:42:00 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:13.026 17:42:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:13.026 17:42:00 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2782390 00:05:13.026 17:42:00 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:13.026 17:42:00 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:13.026 17:42:00 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2782390' 00:05:13.026 killing process with pid 2782390 00:05:13.026 17:42:00 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2782390 00:05:13.026 17:42:00 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2782390 00:05:13.026 17:42:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:13.026 17:42:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:13.026 17:42:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2782281 ]] 00:05:13.026 17:42:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2782281 00:05:13.026 17:42:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2782281 ']' 00:05:13.026 17:42:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2782281 00:05:13.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2782281) - No such process 00:05:13.026 17:42:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2782281 is not found' 00:05:13.026 Process with pid 2782281 is not found 00:05:13.026 17:42:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2782390 ]] 00:05:13.026 17:42:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2782390 00:05:13.026 17:42:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2782390 ']' 00:05:13.026 17:42:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2782390 00:05:13.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2782390) - No such process 00:05:13.026 17:42:00 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2782390 is not found' 00:05:13.027 Process with pid 2782390 is not found 00:05:13.027 17:42:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:13.027 00:05:13.027 real 0m10.485s 00:05:13.027 user 0m19.177s 00:05:13.027 sys 0m3.680s 00:05:13.027 17:42:00 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.027 17:42:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.027 ************************************ 00:05:13.027 END TEST cpu_locks 00:05:13.027 ************************************ 00:05:13.288 00:05:13.288 real 0m31.950s 00:05:13.288 user 1m2.974s 00:05:13.288 sys 0m6.224s 00:05:13.288 17:42:00 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.288 17:42:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.288 ************************************ 00:05:13.288 END TEST event 00:05:13.288 ************************************ 00:05:13.288 17:42:00 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:13.288 17:42:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.288 17:42:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.288 17:42:00 -- common/autotest_common.sh@10 -- # set +x 00:05:13.288 ************************************ 00:05:13.288 START TEST thread 00:05:13.288 ************************************ 00:05:13.288 17:42:00 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:13.288 * Looking for test storage... 00:05:13.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:13.288 17:42:00 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:13.288 17:42:00 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:13.288 17:42:00 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:13.288 17:42:01 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:13.288 17:42:01 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.288 17:42:01 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.288 17:42:01 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.288 17:42:01 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.288 17:42:01 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.288 17:42:01 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.288 17:42:01 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.288 17:42:01 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.288 17:42:01 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.288 17:42:01 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.288 17:42:01 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.288 17:42:01 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:13.288 17:42:01 thread -- scripts/common.sh@345 -- # : 1 00:05:13.288 17:42:01 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.288 17:42:01 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.288 17:42:01 thread -- scripts/common.sh@365 -- # decimal 1 00:05:13.288 17:42:01 thread -- scripts/common.sh@353 -- # local d=1 00:05:13.288 17:42:01 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.288 17:42:01 thread -- scripts/common.sh@355 -- # echo 1 00:05:13.288 17:42:01 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.288 17:42:01 thread -- scripts/common.sh@366 -- # decimal 2 00:05:13.288 17:42:01 thread -- scripts/common.sh@353 -- # local d=2 00:05:13.288 17:42:01 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.288 17:42:01 thread -- scripts/common.sh@355 -- # echo 2 00:05:13.288 17:42:01 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.288 17:42:01 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.288 17:42:01 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.288 17:42:01 thread -- scripts/common.sh@368 -- # return 0 00:05:13.288 17:42:01 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.288 17:42:01 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:13.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.288 --rc genhtml_branch_coverage=1 00:05:13.288 --rc genhtml_function_coverage=1 00:05:13.288 --rc genhtml_legend=1 00:05:13.288 --rc geninfo_all_blocks=1 00:05:13.288 --rc geninfo_unexecuted_blocks=1 00:05:13.288 00:05:13.288 ' 00:05:13.288 17:42:01 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:13.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.288 --rc genhtml_branch_coverage=1 00:05:13.288 --rc genhtml_function_coverage=1 00:05:13.288 --rc genhtml_legend=1 00:05:13.288 --rc geninfo_all_blocks=1 00:05:13.288 --rc geninfo_unexecuted_blocks=1 00:05:13.288 
00:05:13.288 ' 00:05:13.288 17:42:01 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:13.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.288 --rc genhtml_branch_coverage=1 00:05:13.288 --rc genhtml_function_coverage=1 00:05:13.288 --rc genhtml_legend=1 00:05:13.288 --rc geninfo_all_blocks=1 00:05:13.288 --rc geninfo_unexecuted_blocks=1 00:05:13.288 00:05:13.288 ' 00:05:13.288 17:42:01 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:13.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.288 --rc genhtml_branch_coverage=1 00:05:13.288 --rc genhtml_function_coverage=1 00:05:13.288 --rc genhtml_legend=1 00:05:13.288 --rc geninfo_all_blocks=1 00:05:13.288 --rc geninfo_unexecuted_blocks=1 00:05:13.288 00:05:13.288 ' 00:05:13.288 17:42:01 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:13.288 17:42:01 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:13.288 17:42:01 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.288 17:42:01 thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.288 ************************************ 00:05:13.288 START TEST thread_poller_perf 00:05:13.288 ************************************ 00:05:13.288 17:42:01 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:13.288 [2024-12-06 17:42:01.067671] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:05:13.288 [2024-12-06 17:42:01.067718] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2782903 ] 00:05:13.547 [2024-12-06 17:42:01.133183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.547 [2024-12-06 17:42:01.162979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.547 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:14.486 [2024-12-06T16:42:02.313Z] ====================================== 00:05:14.486 [2024-12-06T16:42:02.313Z] busy:2404796280 (cyc) 00:05:14.486 [2024-12-06T16:42:02.313Z] total_run_count: 418000 00:05:14.486 [2024-12-06T16:42:02.313Z] tsc_hz: 2400000000 (cyc) 00:05:14.486 [2024-12-06T16:42:02.313Z] ====================================== 00:05:14.486 [2024-12-06T16:42:02.313Z] poller_cost: 5753 (cyc), 2397 (nsec) 00:05:14.486 00:05:14.486 real 0m1.136s 00:05:14.486 user 0m1.075s 00:05:14.486 sys 0m0.057s 00:05:14.486 17:42:02 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.486 17:42:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:14.486 ************************************ 00:05:14.486 END TEST thread_poller_perf 00:05:14.486 ************************************ 00:05:14.486 17:42:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:14.486 17:42:02 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:14.486 17:42:02 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.486 17:42:02 thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.486 ************************************ 00:05:14.486 START TEST thread_poller_perf 00:05:14.486 ************************************ 00:05:14.486 17:42:02 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:14.486 [2024-12-06 17:42:02.248768] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:05:14.486 [2024-12-06 17:42:02.248815] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2783296 ] 00:05:14.747 [2024-12-06 17:42:02.314418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.747 [2024-12-06 17:42:02.344462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.747 Running 1000 pollers for 1 seconds with 0 microseconds period. 
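The summary above is plain arithmetic over three reported numbers: poller_cost is busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. A quick reproduction of the 1 microsecond run's figures (the 0 microsecond run that follows checks out the same way: 2401627670 / 5109000 = 470 cyc, 195 nsec), assuming the integer truncation the tool prints:

# Sketch: rederive poller_cost from the figures in the table above.
busy=2404796280; runs=418000; tsc_hz=2400000000
cyc=$(( busy / runs ))                     # 5753 cycles per poll
nsec=$(( cyc * 1000000000 / tsc_hz ))      # 2397 ns at 2.4 GHz
echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"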
00:05:15.688 [2024-12-06T16:42:03.515Z] ====================================== 00:05:15.688 [2024-12-06T16:42:03.515Z] busy:2401627670 (cyc) 00:05:15.688 [2024-12-06T16:42:03.515Z] total_run_count: 5109000 00:05:15.688 [2024-12-06T16:42:03.515Z] tsc_hz: 2400000000 (cyc) 00:05:15.688 [2024-12-06T16:42:03.515Z] ====================================== 00:05:15.688 [2024-12-06T16:42:03.515Z] poller_cost: 470 (cyc), 195 (nsec) 00:05:15.688 00:05:15.688 real 0m1.133s 00:05:15.688 user 0m1.073s 00:05:15.688 sys 0m0.057s 00:05:15.688 17:42:03 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.688 17:42:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:15.688 ************************************ 00:05:15.688 END TEST thread_poller_perf 00:05:15.688 ************************************ 00:05:15.688 17:42:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:15.688 00:05:15.688 real 0m2.480s 00:05:15.688 user 0m2.257s 00:05:15.688 sys 0m0.227s 00:05:15.688 17:42:03 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.688 17:42:03 thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.688 ************************************ 00:05:15.688 END TEST thread 00:05:15.688 ************************************ 00:05:15.688 17:42:03 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:15.688 17:42:03 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:15.688 17:42:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.688 17:42:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.688 17:42:03 -- common/autotest_common.sh@10 -- # set +x 00:05:15.688 ************************************ 00:05:15.688 START TEST app_cmdline 00:05:15.688 ************************************ 00:05:15.688 17:42:03 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:15.688 * Looking for test storage... 
00:05:15.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:15.688 17:42:03 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:15.688 17:42:03 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:15.688 17:42:03 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:15.948 17:42:03 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.948 17:42:03 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:15.948 17:42:03 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.948 17:42:03 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:15.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.948 --rc genhtml_branch_coverage=1 00:05:15.948 --rc genhtml_function_coverage=1 00:05:15.948 --rc genhtml_legend=1 00:05:15.948 --rc geninfo_all_blocks=1 00:05:15.948 --rc geninfo_unexecuted_blocks=1 00:05:15.948 00:05:15.948 ' 00:05:15.948 17:42:03 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:15.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.948 --rc genhtml_branch_coverage=1 00:05:15.948 --rc genhtml_function_coverage=1 00:05:15.948 --rc genhtml_legend=1 00:05:15.948 --rc geninfo_all_blocks=1 00:05:15.948 --rc geninfo_unexecuted_blocks=1 
00:05:15.948 00:05:15.948 ' 00:05:15.948 17:42:03 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:15.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.948 --rc genhtml_branch_coverage=1 00:05:15.948 --rc genhtml_function_coverage=1 00:05:15.948 --rc genhtml_legend=1 00:05:15.948 --rc geninfo_all_blocks=1 00:05:15.948 --rc geninfo_unexecuted_blocks=1 00:05:15.948 00:05:15.948 ' 00:05:15.948 17:42:03 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:15.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.948 --rc genhtml_branch_coverage=1 00:05:15.948 --rc genhtml_function_coverage=1 00:05:15.948 --rc genhtml_legend=1 00:05:15.948 --rc geninfo_all_blocks=1 00:05:15.948 --rc geninfo_unexecuted_blocks=1 00:05:15.948 00:05:15.948 ' 00:05:15.948 17:42:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:15.948 17:42:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2783691 00:05:15.948 17:42:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2783691 00:05:15.948 17:42:03 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2783691 ']' 00:05:15.948 17:42:03 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.948 17:42:03 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.948 17:42:03 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.948 17:42:03 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.948 17:42:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:15.948 17:42:03 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:15.948 [2024-12-06 17:42:03.605229] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:05:15.948 [2024-12-06 17:42:03.605286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2783691 ] 00:05:15.948 [2024-12-06 17:42:03.671265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.948 [2024-12-06 17:42:03.702413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.208 17:42:03 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.208 17:42:03 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:16.208 17:42:03 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:16.208 { 00:05:16.208 "version": "SPDK v25.01-pre git sha1 88dfb58dc", 00:05:16.208 "fields": { 00:05:16.208 "major": 25, 00:05:16.208 "minor": 1, 00:05:16.208 "patch": 0, 00:05:16.208 "suffix": "-pre", 00:05:16.208 "commit": "88dfb58dc" 00:05:16.208 } 00:05:16.208 } 00:05:16.208 17:42:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:16.208 17:42:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:16.208 17:42:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:16.208 17:42:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:16.208 17:42:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:16.208 17:42:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:16.208 17:42:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:16.208 17:42:04 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.208 17:42:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.467 17:42:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:16.467 17:42:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:16.467 17:42:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:16.467 request: 00:05:16.467 { 00:05:16.467 "method": "env_dpdk_get_mem_stats", 00:05:16.467 "req_id": 1 00:05:16.467 } 00:05:16.467 Got JSON-RPC error response 00:05:16.467 response: 00:05:16.467 { 00:05:16.467 "code": -32601, 00:05:16.467 "message": "Method not found" 00:05:16.467 } 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:16.467 17:42:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2783691 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2783691 ']' 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2783691 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2783691 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2783691' 00:05:16.467 killing process with pid 2783691 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@973 -- # kill 2783691 00:05:16.467 17:42:04 app_cmdline -- common/autotest_common.sh@978 -- # wait 2783691 00:05:16.727 00:05:16.727 real 0m1.008s 00:05:16.727 user 0m1.192s 00:05:16.727 sys 0m0.349s 00:05:16.727 17:42:04 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.727 17:42:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:16.727 ************************************ 00:05:16.727 END TEST app_cmdline 00:05:16.727 ************************************ 00:05:16.727 17:42:04 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:16.727 17:42:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.727 17:42:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.727 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:05:16.727 ************************************ 00:05:16.727 START TEST version 00:05:16.727 ************************************ 00:05:16.727 17:42:04 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:16.727 * Looking for test storage... 
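The app_cmdline run that just ended exercises the RPC allowlist: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer normally while env_dpdk_get_mem_stats is rejected with -32601 "Method not found". A sketch of the same allowlist behaviour, using the binary and rpc.py paths from this workspace (startup wait and teardown elided):

# Sketch: only the two allowed methods answer; anything else is -32601.
BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$BIN" --rpcs-allowed spdk_get_version,rpc_get_methods &
# (the real test waits for the UNIX socket before issuing RPCs)
"$RPC" spdk_get_version         # allowed: prints the version JSON above
"$RPC" env_dpdk_get_mem_stats   # not allowed: JSON-RPC -32601, Method not found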
00:05:16.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:16.727 17:42:04 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:16.727 17:42:04 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:16.727 17:42:04 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:16.988 17:42:04 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:16.988 17:42:04 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.988 17:42:04 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.988 17:42:04 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.988 17:42:04 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.988 17:42:04 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.988 17:42:04 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.988 17:42:04 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.988 17:42:04 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.988 17:42:04 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.988 17:42:04 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.988 17:42:04 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.988 17:42:04 version -- scripts/common.sh@344 -- # case "$op" in 00:05:16.988 17:42:04 version -- scripts/common.sh@345 -- # : 1 00:05:16.988 17:42:04 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.988 17:42:04 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.988 17:42:04 version -- scripts/common.sh@365 -- # decimal 1 00:05:16.988 17:42:04 version -- scripts/common.sh@353 -- # local d=1 00:05:16.988 17:42:04 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.988 17:42:04 version -- scripts/common.sh@355 -- # echo 1 00:05:16.988 17:42:04 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.988 17:42:04 version -- scripts/common.sh@366 -- # decimal 2 00:05:16.988 17:42:04 version -- scripts/common.sh@353 -- # local d=2 00:05:16.988 17:42:04 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.988 17:42:04 version -- scripts/common.sh@355 -- # echo 2 00:05:16.988 17:42:04 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.988 17:42:04 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.988 17:42:04 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.988 17:42:04 version -- scripts/common.sh@368 -- # return 0 00:05:16.988 17:42:04 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.988 17:42:04 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:16.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.988 --rc genhtml_branch_coverage=1 00:05:16.988 --rc genhtml_function_coverage=1 00:05:16.988 --rc genhtml_legend=1 00:05:16.988 --rc geninfo_all_blocks=1 00:05:16.988 --rc geninfo_unexecuted_blocks=1 00:05:16.988 00:05:16.988 ' 00:05:16.988 17:42:04 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:16.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.988 --rc genhtml_branch_coverage=1 00:05:16.988 --rc genhtml_function_coverage=1 00:05:16.988 --rc genhtml_legend=1 00:05:16.988 --rc geninfo_all_blocks=1 00:05:16.988 --rc geninfo_unexecuted_blocks=1 00:05:16.988 00:05:16.988 ' 00:05:16.988 17:42:04 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:16.988 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.988 --rc genhtml_branch_coverage=1 00:05:16.988 --rc genhtml_function_coverage=1 00:05:16.988 --rc genhtml_legend=1 00:05:16.988 --rc geninfo_all_blocks=1 00:05:16.988 --rc geninfo_unexecuted_blocks=1 00:05:16.988 00:05:16.988 ' 00:05:16.988 17:42:04 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:16.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.988 --rc genhtml_branch_coverage=1 00:05:16.988 --rc genhtml_function_coverage=1 00:05:16.988 --rc genhtml_legend=1 00:05:16.988 --rc geninfo_all_blocks=1 00:05:16.988 --rc geninfo_unexecuted_blocks=1 00:05:16.988 00:05:16.988 ' 00:05:16.988 17:42:04 version -- app/version.sh@17 -- # get_header_version major 00:05:16.988 17:42:04 version -- app/version.sh@14 -- # cut -f2 00:05:16.988 17:42:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:16.988 17:42:04 version -- app/version.sh@14 -- # tr -d '"' 00:05:16.988 17:42:04 version -- app/version.sh@17 -- # major=25 00:05:16.988 17:42:04 version -- app/version.sh@18 -- # get_header_version minor 00:05:16.988 17:42:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:16.988 17:42:04 version -- app/version.sh@14 -- # cut -f2 00:05:16.988 17:42:04 version -- app/version.sh@14 -- # tr -d '"' 00:05:16.988 17:42:04 version -- app/version.sh@18 -- # minor=1 00:05:16.988 17:42:04 version -- app/version.sh@19 -- # get_header_version patch 00:05:16.988 17:42:04 version -- app/version.sh@14 -- # tr -d '"' 00:05:16.989 17:42:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:16.989 17:42:04 version -- app/version.sh@14 -- # cut -f2 00:05:16.989 17:42:04 version -- app/version.sh@19 -- # patch=0 00:05:16.989 17:42:04 version -- app/version.sh@20 -- # get_header_version suffix 00:05:16.989 17:42:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:16.989 17:42:04 version -- app/version.sh@14 -- # cut -f2 00:05:16.989 17:42:04 version -- app/version.sh@14 -- # tr -d '"' 00:05:16.989 17:42:04 version -- app/version.sh@20 -- # suffix=-pre 00:05:16.989 17:42:04 version -- app/version.sh@22 -- # version=25.1 00:05:16.989 17:42:04 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:16.989 17:42:04 version -- app/version.sh@28 -- # version=25.1rc0 00:05:16.989 17:42:04 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:16.989 17:42:04 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:16.989 17:42:04 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:16.989 17:42:04 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:16.989 00:05:16.989 real 0m0.169s 00:05:16.989 user 0m0.100s 00:05:16.989 sys 0m0.091s 00:05:16.989 17:42:04 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.989 
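The version test assembles 25.1rc0 by pulling SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h with the grep | cut -f2 | tr -d '"' pipeline traced above, then compares it against python3's spdk.__version__. The same extraction condensed into one helper (the -pre to rc0 mapping follows this trace, not the full version.sh logic):

# Sketch: rebuild the version string the way get_header_version does above.
H=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
get() { grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$H" | cut -f2 | tr -d '"'; }
version="$(get MAJOR).$(get MINOR)"
(( $(get PATCH) != 0 )) && version+=".$(get PATCH)"
[[ $(get SUFFIX) == -pre ]] && version+=rc0
echo "$version"    # 25.1rc0 for this tree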
17:42:04 version -- common/autotest_common.sh@10 -- # set +x 00:05:16.989 ************************************ 00:05:16.989 END TEST version 00:05:16.989 ************************************ 00:05:16.989 17:42:04 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:16.989 17:42:04 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:16.989 17:42:04 -- spdk/autotest.sh@194 -- # uname -s 00:05:16.989 17:42:04 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:16.989 17:42:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:16.989 17:42:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:16.989 17:42:04 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:16.989 17:42:04 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:16.989 17:42:04 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:16.989 17:42:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.989 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:05:16.989 17:42:04 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:16.989 17:42:04 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:16.989 17:42:04 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:16.989 17:42:04 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:16.989 17:42:04 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:16.989 17:42:04 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:16.989 17:42:04 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:16.989 17:42:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:16.989 17:42:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.989 17:42:04 -- common/autotest_common.sh@10 -- # set +x 00:05:16.989 ************************************ 00:05:16.989 START TEST nvmf_tcp 00:05:16.989 ************************************ 00:05:16.989 17:42:04 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:16.989 * Looking for test storage... 
00:05:16.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:16.989 17:42:04 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:17.250 17:42:04 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:17.250 17:42:04 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:17.250 17:42:04 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.250 17:42:04 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:17.250 17:42:04 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.250 17:42:04 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:17.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.250 --rc genhtml_branch_coverage=1 00:05:17.250 --rc genhtml_function_coverage=1 00:05:17.250 --rc genhtml_legend=1 00:05:17.250 --rc geninfo_all_blocks=1 00:05:17.250 --rc geninfo_unexecuted_blocks=1 00:05:17.250 00:05:17.250 ' 00:05:17.250 17:42:04 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:17.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.250 --rc genhtml_branch_coverage=1 00:05:17.250 --rc genhtml_function_coverage=1 00:05:17.250 --rc genhtml_legend=1 00:05:17.250 --rc geninfo_all_blocks=1 00:05:17.250 --rc geninfo_unexecuted_blocks=1 00:05:17.250 00:05:17.250 ' 00:05:17.250 17:42:04 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:17.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.250 --rc genhtml_branch_coverage=1 00:05:17.250 --rc genhtml_function_coverage=1 00:05:17.250 --rc genhtml_legend=1 00:05:17.250 --rc geninfo_all_blocks=1 00:05:17.250 --rc geninfo_unexecuted_blocks=1 00:05:17.250 00:05:17.250 ' 00:05:17.250 17:42:04 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:17.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.250 --rc genhtml_branch_coverage=1 00:05:17.250 --rc genhtml_function_coverage=1 00:05:17.250 --rc genhtml_legend=1 00:05:17.250 --rc geninfo_all_blocks=1 00:05:17.251 --rc geninfo_unexecuted_blocks=1 00:05:17.251 00:05:17.251 ' 00:05:17.251 17:42:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:17.251 17:42:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:17.251 17:42:04 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:17.251 17:42:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:17.251 17:42:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.251 17:42:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.251 ************************************ 00:05:17.251 START TEST nvmf_target_core 00:05:17.251 ************************************ 00:05:17.251 17:42:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:17.251 * Looking for test storage... 00:05:17.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:17.251 17:42:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:17.251 17:42:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:17.251 17:42:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:17.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.251 --rc genhtml_branch_coverage=1 00:05:17.251 --rc genhtml_function_coverage=1 00:05:17.251 --rc genhtml_legend=1 00:05:17.251 --rc geninfo_all_blocks=1 00:05:17.251 --rc geninfo_unexecuted_blocks=1 00:05:17.251 00:05:17.251 ' 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:17.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.251 --rc genhtml_branch_coverage=1 00:05:17.251 --rc genhtml_function_coverage=1 00:05:17.251 --rc genhtml_legend=1 00:05:17.251 --rc geninfo_all_blocks=1 00:05:17.251 --rc geninfo_unexecuted_blocks=1 00:05:17.251 00:05:17.251 ' 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:17.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.251 --rc genhtml_branch_coverage=1 00:05:17.251 --rc genhtml_function_coverage=1 00:05:17.251 --rc genhtml_legend=1 00:05:17.251 --rc geninfo_all_blocks=1 00:05:17.251 --rc geninfo_unexecuted_blocks=1 00:05:17.251 00:05:17.251 ' 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:17.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.251 --rc genhtml_branch_coverage=1 00:05:17.251 --rc genhtml_function_coverage=1 00:05:17.251 --rc genhtml_legend=1 00:05:17.251 --rc geninfo_all_blocks=1 00:05:17.251 --rc geninfo_unexecuted_blocks=1 00:05:17.251 00:05:17.251 ' 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:17.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:17.251 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:17.252 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:17.252 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:17.252 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:17.252 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:17.252 17:42:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:17.252 17:42:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:17.252 17:42:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.252 17:42:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:17.510 
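The "[: : integer expression expected" message above is a genuine shell error captured from test/nvmf/common.sh line 33, not test output: the traced command is '[' '' -eq 1 ']', and test(1)'s -eq requires integer operands on both sides, so an unset or empty flag variable trips it every time common.sh is sourced (the same message recurs later in this log). A minimal bash sketch of the failure and a defensive rewrite; SPDK_TEST_FOO is a placeholder name, not the variable the script actually checks:

    # Reproduce: -eq demands integers, and an empty string is not one.
    flag=""
    [ "$flag" -eq 1 ] && echo enabled   # stderr: "[: : integer expression expected", exit status 2

    # Defensive rewrites that treat empty/unset as 0:
    [ "${SPDK_TEST_FOO:-0}" -eq 1 ] && echo enabled
    [[ "${SPDK_TEST_FOO:-0}" == 1 ]] && echo enabled   # string comparison sidesteps -eq entirely
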
************************************ 00:05:17.510 START TEST nvmf_abort 00:05:17.510 ************************************ 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:17.510 * Looking for test storage... 00:05:17.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:17.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.510 --rc genhtml_branch_coverage=1 00:05:17.510 --rc genhtml_function_coverage=1 00:05:17.510 --rc genhtml_legend=1 00:05:17.510 --rc geninfo_all_blocks=1 00:05:17.510 --rc geninfo_unexecuted_blocks=1 00:05:17.510 00:05:17.510 ' 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:17.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.510 --rc genhtml_branch_coverage=1 00:05:17.510 --rc genhtml_function_coverage=1 00:05:17.510 --rc genhtml_legend=1 00:05:17.510 --rc geninfo_all_blocks=1 00:05:17.510 --rc geninfo_unexecuted_blocks=1 00:05:17.510 00:05:17.510 ' 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:17.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.510 --rc genhtml_branch_coverage=1 00:05:17.510 --rc genhtml_function_coverage=1 00:05:17.510 --rc genhtml_legend=1 00:05:17.510 --rc geninfo_all_blocks=1 00:05:17.510 --rc geninfo_unexecuted_blocks=1 00:05:17.510 00:05:17.510 ' 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:17.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.510 --rc genhtml_branch_coverage=1 00:05:17.510 --rc genhtml_function_coverage=1 00:05:17.510 --rc genhtml_legend=1 00:05:17.510 --rc geninfo_all_blocks=1 00:05:17.510 --rc geninfo_unexecuted_blocks=1 00:05:17.510 00:05:17.510 ' 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:17.510 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:17.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:17.511 17:42:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:24.082 17:42:10 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:24.082 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:24.082 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:24.082 17:42:10 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:24.082 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:24.083 Found net devices under 0000:31:00.0: cvl_0_0 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:24.083 Found net devices under 0000:31:00.1: cvl_0_1 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:24.083 17:42:10 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:24.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:24.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:05:24.083 00:05:24.083 --- 10.0.0.2 ping statistics --- 00:05:24.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:24.083 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:24.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:24.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:05:24.083 00:05:24.083 --- 10.0.0.1 ping statistics --- 00:05:24.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:24.083 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2788368 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2788368 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2788368 ']' 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:24.083 17:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.083 [2024-12-06 17:42:10.981865] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
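nvmf_tcp_init, traced above, builds the suite's phy-mode topology out of the two e810 ports it discovered: cvl_0_0 is moved into a private network namespace to serve as the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens the NVMe/TCP port, and one ping in each direction proves the link. Condensed from the trace (interface and namespace names as in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
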
00:05:24.083 [2024-12-06 17:42:10.981931] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:24.083 [2024-12-06 17:42:11.074522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:24.083 [2024-12-06 17:42:11.127704] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:24.083 [2024-12-06 17:42:11.127755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:24.083 [2024-12-06 17:42:11.127764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:24.083 [2024-12-06 17:42:11.127772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:24.083 [2024-12-06 17:42:11.127779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:24.083 [2024-12-06 17:42:11.129700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.083 [2024-12-06 17:42:11.129863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.083 [2024-12-06 17:42:11.129864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.083 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.083 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:24.083 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:24.083 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.083 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.083 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:24.083 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:24.083 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.083 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.083 [2024-12-06 17:42:11.826393] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.083 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.083 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:24.083 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.084 Malloc0 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.084 Delay0 
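rpc_cmd in the trace above is the suite's wrapper around scripts/rpc.py, talking over the Unix-domain RPC socket to the nvmf_tgt just started inside the namespace. The abort test needs I/Os that stay in flight long enough to be aborted, so it layers a delay bdev over a 64 MiB malloc ramdisk. A sketch of the same bring-up with rpc.py directly; the flags are copied from this run, and -r/-t/-w/-n set the delay bdev's average and tail read/write latencies in microseconds (so 1000000 means roughly one second per I/O):

    # From the SPDK repo root; the RPC socket is a filesystem socket, so no netns wrapper is needed.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB ramdisk, 4096-byte blocks
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000               # ~1 s of injected latency
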
00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.084 [2024-12-06 17:42:11.896695] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.084 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:24.343 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.343 17:42:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:24.343 [2024-12-06 17:42:12.014843] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:26.879 Initializing NVMe Controllers 00:05:26.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:26.879 controller IO queue size 128 less than required 00:05:26.879 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:26.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:26.879 Initialization complete. Launching workers. 
00:05:26.879 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 125, failed: 28484 00:05:26.879 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28547, failed to submit 62 00:05:26.879 success 28488, unsuccessful 59, failed 0 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:26.879 rmmod nvme_tcp 00:05:26.879 rmmod nvme_fabrics 00:05:26.879 rmmod nvme_keyring 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2788368 ']' 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2788368 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2788368 ']' 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2788368 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2788368 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2788368' 00:05:26.879 killing process with pid 2788368 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2788368 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2788368 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:26.879 17:42:14 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:26.879 17:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:28.781 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:28.782 00:05:28.782 real 0m11.338s 00:05:28.782 user 0m13.057s 00:05:28.782 sys 0m5.220s 00:05:28.782 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.782 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:28.782 ************************************ 00:05:28.782 END TEST nvmf_abort 00:05:28.782 ************************************ 00:05:28.782 17:42:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:28.782 17:42:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:28.782 17:42:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.782 17:42:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:28.782 ************************************ 00:05:28.782 START TEST nvmf_ns_hotplug_stress 00:05:28.782 ************************************ 00:05:28.782 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:28.782 * Looking for test storage... 
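The firewall teardown traced above (the iptr helper inside nvmftestfini) needs no per-rule bookkeeping: every rule the suite added carried an -m comment tag starting with SPDK_NVMF:, so dumping the ruleset, filtering the tagged lines out, and restoring removes them all in one pass before the interface address is flushed:

    # Every test rule was inserted as: iptables ... -m comment --comment 'SPDK_NVMF:<args>'
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop all tagged rules at once
    ip -4 addr flush cvl_0_1                               # clear the initiator address
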
00:05:28.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:28.782 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.782 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.782 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:29.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.041 --rc genhtml_branch_coverage=1 00:05:29.041 --rc genhtml_function_coverage=1 00:05:29.041 --rc genhtml_legend=1 00:05:29.041 --rc geninfo_all_blocks=1 00:05:29.041 --rc geninfo_unexecuted_blocks=1 00:05:29.041 00:05:29.041 ' 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:29.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.041 --rc genhtml_branch_coverage=1 00:05:29.041 --rc genhtml_function_coverage=1 00:05:29.041 --rc genhtml_legend=1 00:05:29.041 --rc geninfo_all_blocks=1 00:05:29.041 --rc geninfo_unexecuted_blocks=1 00:05:29.041 00:05:29.041 ' 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:29.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.041 --rc genhtml_branch_coverage=1 00:05:29.041 --rc genhtml_function_coverage=1 00:05:29.041 --rc genhtml_legend=1 00:05:29.041 --rc geninfo_all_blocks=1 00:05:29.041 --rc geninfo_unexecuted_blocks=1 00:05:29.041 00:05:29.041 ' 00:05:29.041 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:29.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.041 --rc genhtml_branch_coverage=1 00:05:29.041 --rc genhtml_function_coverage=1 00:05:29.041 --rc genhtml_legend=1 00:05:29.041 --rc geninfo_all_blocks=1 00:05:29.041 --rc geninfo_unexecuted_blocks=1 00:05:29.041 00:05:29.041 ' 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:29.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:29.042 17:42:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:34.318 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:34.318 
17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:34.318 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:34.318 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:34.319 Found net devices under 0000:31:00.0: cvl_0_0 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:34.319 Found net devices under 0000:31:00.1: cvl_0_1 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:34.319 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:34.579 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:34.579 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:34.579 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:34.579 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:34.579 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:34.579 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:34.579 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:34.579 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:34.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:34.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:05:34.579 00:05:34.579 --- 10.0.0.2 ping statistics --- 00:05:34.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:34.579 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:05:34.579 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:34.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:34.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:05:34.579 00:05:34.579 --- 10.0.0.1 ping statistics --- 00:05:34.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:34.579 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:05:34.579 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:34.579 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:34.579 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:34.579 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:34.579 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:34.579 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:34.579 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:34.579 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:34.579 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:34.838 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:34.838 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:34.838 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.838 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:34.838 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2793682 00:05:34.838 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2793682 00:05:34.838 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2793682 ']' 00:05:34.839 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.839 17:42:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.839 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.839 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.839 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:34.839 17:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:34.839 [2024-12-06 17:42:22.470927] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:05:34.839 [2024-12-06 17:42:22.470991] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:34.839 [2024-12-06 17:42:22.550307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.839 [2024-12-06 17:42:22.587068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:34.839 [2024-12-06 17:42:22.587112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:34.839 [2024-12-06 17:42:22.587118] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:34.839 [2024-12-06 17:42:22.587123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:34.839 [2024-12-06 17:42:22.587127] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
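The sequence above is the harness starting the SPDK NVMe-oF target inside the freshly created network namespace: nvmf_tgt is launched through ip netns exec cvl_0_0_ns_spdk with core mask 0xE (matching the three reactor-start notices that follow, on cores 1-3), and waitforlisten blocks until the app answers on its UNIX-domain RPC socket before any setup RPCs are issued. A minimal sketch of that start-and-wait pattern, assuming an SPDK checkout at $SPDK_ROOT; the polling loop here is illustrative, not the verbatim waitforlisten helper from common/autotest_common.sh:

  # Launch the target in the test namespace; -m 0xE pins reactors to cores 1-3.
  ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!

  # Block until the RPC socket answers (bounded retries), then proceed with setup RPCs.
  for _ in $(seq 1 100); do
      "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.5
  done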
00:05:34.839 [2024-12-06 17:42:22.588469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.839 [2024-12-06 17:42:22.588624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.839 [2024-12-06 17:42:22.588626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.778 17:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.778 17:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:35.778 17:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:35.778 17:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.778 17:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:35.778 17:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:35.778 17:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:35.778 17:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:35.778 [2024-12-06 17:42:23.417832] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:35.778 17:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:35.778 17:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:36.037 [2024-12-06 17:42:23.738929] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:36.037 17:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:36.296 17:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:36.296 Malloc0 00:05:36.296 17:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:36.555 Delay0 00:05:36.555 17:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.814 17:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:36.814 NULL1 00:05:36.814 17:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:37.072 17:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2794085 00:05:37.072 17:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:37.072 17:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.072 17:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:38.448 Read completed with error (sct=0, sc=11) 00:05:38.448 17:42:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.448 17:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:38.448 17:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:38.448 true 00:05:38.448 17:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:38.448 17:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.385 17:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.385 17:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:39.385 17:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:39.643 true 00:05:39.643 17:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:39.643 17:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.903 17:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
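With the subsystem wired up (TCP transport, subsystem cnode1 listening on 10.0.0.2:4420, Malloc0 wrapped in the Delay0 delay bdev, and the 1000-block NULL1 null bdev), the stress phase begins: spdk_nvme_perf drives queued random reads over TCP for 30 seconds while the script hot-removes and re-adds namespaces and grows NULL1 by one block count per pass; the suppressed 'Read completed with error (sct=0, sc=11)' messages are consistent with initiator reads racing namespace hot-removal, which is exactly the behavior under test. A minimal sketch of that loop as reconstructed from the trace (target/ns_hotplug_stress.sh may bound or order details differently):

  rpc="$SPDK_ROOT/scripts/rpc.py"
  null_size=1000

  # Initiator-side load: 30 s of queued random reads against the TCP listener.
  "$SPDK_ROOT/build/bin/spdk_nvme_perf" -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  # Hotplug namespaces under load for as long as perf is alive.
  while kill -0 "$PERF_PID" 2>/dev/null; do
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      "$rpc" bdev_null_resize NULL1 "$null_size"
  done
  wait "$PERF_PID"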
00:05:39.903 17:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:39.903 17:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:40.162 true 00:05:40.162 17:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:40.162 17:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.421 17:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.421 17:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:40.421 17:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:40.680 true 00:05:40.680 17:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:40.680 17:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.680 17:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.939 17:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:40.939 17:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:40.939 true 00:05:40.939 17:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:40.939 17:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.199 17:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.458 17:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:41.458 17:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:41.458 true 00:05:41.458 17:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:41.458 17:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.719 17:42:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.977 17:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:41.977 17:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:41.977 true 00:05:41.977 17:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:41.977 17:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.235 17:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.236 17:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:42.236 17:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:42.494 true 00:05:42.494 17:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:42.494 17:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.433 17:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.433 17:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:43.433 17:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:43.697 true 00:05:43.697 17:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:43.697 17:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.697 17:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.022 17:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:44.022 17:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:44.022 true 00:05:44.022 17:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:44.022 17:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.309 17:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.570 17:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:44.570 17:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:44.570 true 00:05:44.570 17:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:44.570 17:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.829 17:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.829 17:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:44.830 17:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:45.088 true 00:05:45.088 17:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:45.088 17:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.347 17:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.347 17:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:45.347 17:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:45.606 true 00:05:45.606 17:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:45.606 17:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.606 17:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.864 17:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:45.864 17:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:45.864 true 00:05:46.124 17:42:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:46.124 17:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.124 17:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.383 17:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:46.383 17:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:46.384 true 00:05:46.384 17:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:46.384 17:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.762 17:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.762 17:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:47.762 17:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:47.762 true 00:05:47.762 17:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:47.762 17:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.021 17:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.021 17:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:48.021 17:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:48.281 true 00:05:48.281 17:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:48.281 17:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.540 17:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.540 17:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:48.540 17:42:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:48.800 true 00:05:48.800 17:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:48.800 17:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.800 17:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.059 17:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:49.059 17:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:49.317 true 00:05:49.317 17:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:49.317 17:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.317 17:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.574 17:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:49.575 17:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:49.575 true 00:05:49.575 17:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:49.575 17:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.950 17:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.950 17:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:50.950 17:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:50.950 true 00:05:50.950 17:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:50.950 17:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.208 17:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.208 17:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:51.208 17:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:51.466 true 00:05:51.466 17:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:51.466 17:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.724 17:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.724 17:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:51.724 17:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:51.982 true 00:05:51.982 17:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:51.982 17:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.982 17:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.265 17:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:52.265 17:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:52.265 true 00:05:52.523 17:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:52.523 17:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.524 17:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.781 17:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:52.781 17:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:52.781 true 00:05:52.781 17:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:52.781 17:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.717 17:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.975 17:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:53.975 17:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:53.975 true 00:05:53.975 17:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:53.975 17:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.233 17:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.492 17:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:54.492 17:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:54.492 true 00:05:54.492 17:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:54.492 17:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.751 17:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.751 17:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:54.751 17:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:55.009 true 00:05:55.009 17:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:55.009 17:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.269 17:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.269 17:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:55.269 17:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:55.528 true 00:05:55.528 17:42:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:55.528 17:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.528 17:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.787 17:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:55.787 17:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:56.047 true 00:05:56.047 17:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:56.047 17:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.985 17:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.985 17:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:56.985 17:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:57.244 true 00:05:57.244 17:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:57.244 17:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.502 17:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.502 17:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:57.502 17:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:57.760 true 00:05:57.760 17:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:57.760 17:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.760 17:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.019 17:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1033 00:05:58.019 17:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:58.277 true 00:05:58.277 17:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:58.277 17:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.277 17:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.535 17:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:58.535 17:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:58.535 true 00:05:58.794 17:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:58.794 17:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.794 17:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.053 17:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:59.053 17:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:59.053 true 00:05:59.053 17:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:05:59.053 17:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.991 17:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.250 17:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:00.250 17:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:00.250 true 00:06:00.250 17:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:06:00.250 17:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.509 17:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.769 17:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:00.769 17:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:00.769 true 00:06:00.769 17:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:06:00.769 17:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.029 17:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.029 17:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:01.029 17:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:01.289 true 00:06:01.289 17:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:06:01.289 17:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.548 17:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.548 17:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:01.548 17:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:01.807 true 00:06:01.807 17:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:06:01.807 17:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.807 17:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.066 17:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:02.066 17:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:02.325 true 00:06:02.325 17:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:06:02.325 17:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.261 17:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.261 17:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:03.261 17:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:03.521 true 00:06:03.521 17:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:06:03.521 17:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.780 17:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.780 17:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:03.780 17:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:04.040 true 00:06:04.040 17:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:06:04.040 17:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.299 17:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.299 17:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:04.299 17:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:04.558 true 00:06:04.558 17:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:06:04.558 17:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.558 17:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.817 17:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:04.817 17:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1044 00:06:04.817 true 00:06:05.077 17:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:06:05.077 17:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.077 17:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.338 17:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:05.338 17:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:05.338 true 00:06:05.338 17:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:06:05.338 17:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.272 17:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.530 17:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:06.530 17:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:06.788 true 00:06:06.788 17:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:06:06.789 17:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.789 17:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.047 17:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:07.047 17:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:07.047 true 00:06:07.047 17:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085 00:06:07.047 17:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.306 Initializing NVMe Controllers 00:06:07.306 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:07.306 Controller IO queue size 128, less than required. 00:06:07.306 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:07.306 Controller IO queue size 128, less than required.
00:06:07.306 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:07.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:07.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:07.306 Initialization complete. Launching workers.
00:06:07.306 ========================================================
00:06:07.306                                                                                 Latency(us)
00:06:07.306 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:06:07.306 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     362.08       0.18  126096.64    2233.02 1032392.01
00:06:07.306 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   11062.91       5.40   11569.80    1126.75  400631.79
00:06:07.306 ========================================================
00:06:07.306 Total                                                                  :   11424.99       5.58   15199.39    1126.75 1032392.01
00:06:07.306
00:06:07.306 17:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:07.564 17:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:06:07.564 17:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:06:07.564 true
00:06:07.564 17:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2794085
00:06:07.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2794085) - No such process
00:06:07.564 17:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2794085
00:06:07.564 17:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:07.822 17:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:07.822 17:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:07.822 17:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:07.822 17:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:07.822 17:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:07.822 17:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:08.080 null0
00:06:08.081 17:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:08.081 17:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:08.081 17:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:08.339 null1 00:06:08.339 17:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.339 17:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.339 17:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:08.339 null2 00:06:08.339 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.339 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.339 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:08.597 null3 00:06:08.597 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.597 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.597 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:08.597 null4 00:06:08.597 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.597 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.597 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:08.854 null5 00:06:08.854 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.854 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.854 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:09.113 null6 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:09.113 null7 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
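
The xtrace entries around this point are the parallel phase of ns_hotplug_stress.sh: eight null bdevs are created over RPC, then eight add_remove workers are forked and their PIDs collected so they can be reaped later. A minimal bash sketch of the driver block these sh@58-sh@64 tags suggest; the loop syntax is reconstructed from the trace rather than copied from the script, and $rpc_py is shorthand for the full scripts/rpc.py path shown in the log:

    nthreads=8
    pids=()
    # sh@59-60: one 100 MiB null bdev with 4096-byte blocks per worker
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done
    # sh@63-64: fork the workers (add_remove 1 null0 ... add_remove 8 null7)
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    # sh@66: reap all eight workers once they finish
    wait "${pids[@]}"
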
00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2801540 2801541 2801543 2801544 2801545 2801547 2801550 2801553 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:09.113 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.114 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.114 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:09.114 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:09.114 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:09.114 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:09.114 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.114 17:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:09.372 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:09.372 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.372 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:09.372 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:09.372 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:09.372 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:09.372 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:09.372 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:09.631 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:09.891 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.150 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:10.409 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.409 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.409 17:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:10.409 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.409 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.409 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:10.409 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:10.409 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.409 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:10.409 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:10.409 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:10.409 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:10.409 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:10.409 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:10.409 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.409 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.409 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:10.409 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.409 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.409 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:10.668 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:10.927 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.927 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.927 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:10.927 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.927 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
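
Each add_remove worker traced here runs the same short loop; the sh@14-sh@18 tags (local nsid=N bdev=nullM, a (( i < 10 )) counter, nvmf_subsystem_add_ns -n followed by nvmf_subsystem_remove_ns) imply roughly the function below. This is a reconstruction from the trace, not the script verbatim, with $rpc_py again standing for the full rpc.py path:

    add_remove() {
        # sh@14: each worker owns one namespace ID and one null bdev
        local nsid=$1 bdev=$2
        # sh@16: ten hotplug rounds per worker
        for ((i = 0; i < 10; i++)); do
            # sh@17: attach the bdev as namespace $nsid of cnode1
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            # sh@18: detach it again straight away
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
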
00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:10.928 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
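The adds and removes in this loop are fire-and-forget; nothing in the trace reads the subsystem state back. One hedged way to spot-check a cycle from the shell: nvmf_get_subsystems is a standard SPDK RPC, but the output shape assumed by the jq filter below (an array of subsystem objects carrying an nqn field and a namespaces list) should be verified against the running target.

# Hypothetical spot check, not part of ns_hotplug_stress.sh: count the
# namespaces still attached to cnode1; after a full add/remove cycle this
# should print 0.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc_py" nvmf_get_subsystems \
    | jq '[.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1")][0].namespaces | length'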
00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.187 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:11.188 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.188 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.188 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:11.188 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.188 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.188 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:11.188 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:11.188 17:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.446 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.706 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:11.965 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.224 17:42:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.224 17:42:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:12.224 17:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:12.224 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:12.482 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:12.483 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:12.483 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:12.483 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.483 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.483 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.483 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.483 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:12.483 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:12.483 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.483 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.483 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.483 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.483 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.483 17:43:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.483 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.483 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.483 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.483 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:12.743 rmmod nvme_tcp 00:06:12.743 rmmod nvme_fabrics 00:06:12.743 rmmod nvme_keyring 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2793682 ']' 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2793682 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2793682 ']' 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2793682 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2793682 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2793682' 00:06:12.743 killing process with pid 2793682 00:06:12.743 17:43:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2793682 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2793682 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:12.743 17:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:15.282 00:06:15.282 real 0m46.106s 00:06:15.282 user 3m7.366s 00:06:15.282 sys 0m13.477s 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:15.282 ************************************ 00:06:15.282 END TEST nvmf_ns_hotplug_stress 00:06:15.282 ************************************ 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:15.282 ************************************ 00:06:15.282 START TEST nvmf_delete_subsystem 00:06:15.282 ************************************ 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:15.282 * Looking for test storage... 
00:06:15.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:15.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.282 --rc genhtml_branch_coverage=1 00:06:15.282 --rc genhtml_function_coverage=1 00:06:15.282 --rc genhtml_legend=1 00:06:15.282 --rc geninfo_all_blocks=1 00:06:15.282 --rc geninfo_unexecuted_blocks=1 00:06:15.282 00:06:15.282 ' 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:15.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.282 --rc genhtml_branch_coverage=1 00:06:15.282 --rc genhtml_function_coverage=1 00:06:15.282 --rc genhtml_legend=1 00:06:15.282 --rc geninfo_all_blocks=1 00:06:15.282 --rc geninfo_unexecuted_blocks=1 00:06:15.282 00:06:15.282 ' 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:15.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.282 --rc genhtml_branch_coverage=1 00:06:15.282 --rc genhtml_function_coverage=1 00:06:15.282 --rc genhtml_legend=1 00:06:15.282 --rc geninfo_all_blocks=1 00:06:15.282 --rc geninfo_unexecuted_blocks=1 00:06:15.282 00:06:15.282 ' 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:15.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.282 --rc genhtml_branch_coverage=1 00:06:15.282 --rc genhtml_function_coverage=1 00:06:15.282 --rc genhtml_legend=1 00:06:15.282 --rc geninfo_all_blocks=1 00:06:15.282 --rc geninfo_unexecuted_blocks=1 00:06:15.282 00:06:15.282 ' 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.282 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:15.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:15.283 17:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:20.553 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:20.554 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:20.554 
17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:20.554 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:20.554 Found net devices under 0000:31:00.0: cvl_0_0 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:20.554 Found net devices under 0000:31:00.1: cvl_0_1 
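The block above is nvmf/common.sh resolving its cached PCI list to usable NICs: both Intel E810 functions (vendor 0x8086, device 0x159b) pass the driver checks, and each is mapped to its renamed net device through /sys/bus/pci/devices/<bdf>/net. The same discovery can be sketched standalone; the lspci-based enumeration here is an assumption for illustration, since common.sh builds its candidate list from a pci_bus_cache map instead.

# Enumerate E810 functions (8086:159b) and print their net devices,
# mirroring the "Found net devices under 0000:31:00.x: cvl_0_x" lines above.
for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for net in /sys/bus/pci/devices/"$bdf"/net/*; do
        [[ -e "$net" ]] && echo "Found net devices under $bdf: ${net##*/}"
    done
done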
00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:20.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:20.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:06:20.554 00:06:20.554 --- 10.0.0.2 ping statistics --- 00:06:20.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.554 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:20.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:20.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:06:20.554 00:06:20.554 --- 10.0.0.1 ping statistics --- 00:06:20.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:20.554 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2806842 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2806842 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2806842 ']' 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
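[editorial sketch] The nvmf_tcp_init sequence traced above (@250 through @291) splits the two E810 ports across namespaces: cvl_0_0 becomes the target interface at 10.0.0.2 inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP/4420 is opened with a tagged iptables rule, and connectivity is ping-verified in both directions. A condensed replay, run as root, with interface names as discovered above:

  # Target port in its own netns; initiator port stays in the root netns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port; the comment tag is what teardown greps away later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns

The comment string here is shortened to the SPDK_NVMF tag; the suite's ipts helper, as the @790 record shows, embeds the full rule text after that tag.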
00:06:20.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.554 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.555 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.555 17:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:20.813 [2024-12-06 17:43:08.415458] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:06:20.813 [2024-12-06 17:43:08.415518] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:20.814 [2024-12-06 17:43:08.502873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.814 [2024-12-06 17:43:08.538457] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:20.814 [2024-12-06 17:43:08.538489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:20.814 [2024-12-06 17:43:08.538497] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:20.814 [2024-12-06 17:43:08.538504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:20.814 [2024-12-06 17:43:08.538510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:20.814 [2024-12-06 17:43:08.539667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.814 [2024-12-06 17:43:08.539672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.750 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.750 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:21.750 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:21.750 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:21.750 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.750 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:21.750 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:21.750 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.750 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.750 [2024-12-06 17:43:09.247342] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.750 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.750 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:21.750 
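[editorial sketch] nvmfappstart then runs the target application inside that namespace and waits on its RPC socket; rpc_cmd is a thin wrapper over scripts/rpc.py talking to /var/tmp/spdk.sock. A condensed replay of the launch and transport creation, with paths shortened; the polling loop below is a stand-in for the suite's waitforlisten helper, not its implementation:

  # Launch nvmf_tgt on cores 0-1 (-m 0x3) inside the target namespace.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # Wait until the RPC socket answers (stand-in for waitforlisten).
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  # TCP transport with an 8192-byte I/O unit size, as in the trace above.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192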
17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.750 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.750 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.750 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:21.750 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.750 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.750 [2024-12-06 17:43:09.263589] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:21.750 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.750 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:21.750 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.751 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.751 NULL1 00:06:21.751 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.751 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:21.751 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.751 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.751 Delay0 00:06:21.751 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.751 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.751 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.751 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.751 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.751 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2807103 00:06:21.751 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:21.751 17:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:21.751 [2024-12-06 17:43:09.348507] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
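[editorial sketch] The device stack built here is what makes the delete race interesting: NULL1 is a 1000 MB null bdev, and Delay0 wraps it with read/write latencies of 1,000,000 us, so the 5-second queue-depth-128 perf run is guaranteed to have commands outstanding when @32's nvmf_delete_subsystem fires below. A condensed replay of the @16-@28 provisioning; the rpc path is shortened and the flag glosses are my reading of the rpc.py options:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512             # 1000 MB, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s per-I/O latencies
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # Queue-depth-128, 70% read randrw at 512-byte blocks from cores 2-3 for 5 s.
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!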
00:06:23.651 17:43:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:23.651 17:43:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.652 17:43:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:23.911 [several hundred "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completion records, interleaved with "starting I/O failed: -6" submission failures, condensed here for readability: the subsystem was deleted while the perf run's queue-depth-128 I/O was outstanding, so every in-flight command completes with an abort status. The distinct qpair ERROR records from this stretch are kept below.]
[2024-12-06 17:43:11.479344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff460000c40 is same with the state(6) to be set
[2024-12-06 17:43:11.479845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeaf00 is same with the state(6) to be set
[2024-12-06 17:43:12.451386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeec5f0 is same with the state(6) to be set
[2024-12-06 17:43:12.480782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff46000d680 is same with the state(6) to be set
[2024-12-06 17:43:12.480896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff46000d020 is same with the state(6) to be set
[2024-12-06 17:43:12.481673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeb4a0 is same with the state(6) to be set
[the tail of the completion flood and the perf shutdown summary follow as logged]
00:06:24.849 Read completed with error (sct=0, sc=8) 00:06:24.849 Read completed with error (sct=0, sc=8) 00:06:24.849 Read completed with error (sct=0, sc=8) 00:06:24.849 Write completed with error (sct=0, sc=8) 00:06:24.849 Read completed with error (sct=0, sc=8) 00:06:24.849 Read completed with error (sct=0, sc=8) 00:06:24.849 Read completed with error (sct=0, sc=8) 00:06:24.849 Read completed with error (sct=0, sc=8) 00:06:24.849 [2024-12-06 17:43:12.481759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeb0e0 is same with the state(6) to be set 00:06:24.849 Initializing NVMe Controllers 00:06:24.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:24.849 Controller IO queue size 128, less than required. 00:06:24.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:24.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:24.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:24.849 Initialization complete. Launching workers. 00:06:24.849 ======================================================== 00:06:24.849 Latency(us) 00:06:24.849 Device Information : IOPS MiB/s Average min max 00:06:24.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 158.87 0.08 949458.94 227.42 2002061.47 00:06:24.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 171.78 0.08 915421.97 284.58 2001274.58 00:06:24.849 ======================================================== 00:06:24.849 Total : 330.65 0.16 931776.07 227.42 2002061.47 00:06:24.849 00:06:24.849 [2024-12-06 17:43:12.482352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeec5f0 (9): Bad file descriptor 00:06:24.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:24.849 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.849 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:24.849 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2807103 00:06:24.850 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2807103 00:06:25.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2807103) - No such process 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2807103 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2807103 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.417 17:43:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2807103 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.417 17:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:25.417 [2024-12-06 17:43:13.003479] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:25.417 17:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.417 17:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.417 17:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.417 17:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:25.417 17:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.417 17:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2807890 00:06:25.417 17:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:25.417 17:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2807890 00:06:25.417 17:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:25.417 17:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:25.417 [2024-12-06 17:43:13.059407] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, 
even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:25.985 17:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:25.985 17:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2807890 00:06:25.985 17:43:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:26.243 17:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:26.243 17:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2807890 00:06:26.243 17:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:26.811 17:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:26.811 17:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2807890 00:06:26.811 17:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:27.377 17:43:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:27.377 17:43:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2807890 00:06:27.377 17:43:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:27.942 17:43:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:27.942 17:43:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2807890 00:06:27.942 17:43:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:28.509 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:28.509 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2807890 00:06:28.509 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:28.509 Initializing NVMe Controllers 00:06:28.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:28.509 Controller IO queue size 128, less than required. 00:06:28.509 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:28.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:28.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:28.509 Initialization complete. Launching workers. 
00:06:28.509 ======================================================== 00:06:28.509 Latency(us) 00:06:28.509 Device Information : IOPS MiB/s Average min max 00:06:28.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002815.25 1000272.48 1007503.63 00:06:28.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1001993.95 1000169.43 1041836.66 00:06:28.509 ======================================================== 00:06:28.509 Total : 256.00 0.12 1002404.60 1000169.43 1041836.66 00:06:28.509 00:06:28.767 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:28.767 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2807890 00:06:28.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2807890) - No such process 00:06:28.767 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2807890 00:06:28.767 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:28.767 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:28.767 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:28.767 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:28.767 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:28.768 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:28.768 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:28.768 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:28.768 rmmod nvme_tcp 00:06:28.768 rmmod nvme_fabrics 00:06:28.768 rmmod nvme_keyring 00:06:28.768 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:28.768 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:28.768 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2806842 ']' 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2806842 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2806842 ']' 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2806842 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2806842 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2806842' 00:06:29.027 killing process with pid 2806842 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2806842 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2806842 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:29.027 17:43:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:31.567 00:06:31.567 real 0m16.161s 00:06:31.567 user 0m29.743s 00:06:31.567 sys 0m5.287s 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:31.567 ************************************ 00:06:31.567 END TEST nvmf_delete_subsystem 00:06:31.567 ************************************ 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:31.567 ************************************ 00:06:31.567 START TEST nvmf_host_management 00:06:31.567 ************************************ 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:31.567 * Looking for test storage... 
00:06:31.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.567 --rc genhtml_branch_coverage=1 00:06:31.567 --rc genhtml_function_coverage=1 00:06:31.567 --rc genhtml_legend=1 00:06:31.567 --rc geninfo_all_blocks=1 00:06:31.567 --rc geninfo_unexecuted_blocks=1 00:06:31.567 00:06:31.567 ' 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.567 --rc genhtml_branch_coverage=1 00:06:31.567 --rc genhtml_function_coverage=1 00:06:31.567 --rc genhtml_legend=1 00:06:31.567 --rc geninfo_all_blocks=1 00:06:31.567 --rc geninfo_unexecuted_blocks=1 00:06:31.567 00:06:31.567 ' 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.567 --rc genhtml_branch_coverage=1 00:06:31.567 --rc genhtml_function_coverage=1 00:06:31.567 --rc genhtml_legend=1 00:06:31.567 --rc geninfo_all_blocks=1 00:06:31.567 --rc geninfo_unexecuted_blocks=1 00:06:31.567 00:06:31.567 ' 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.567 --rc genhtml_branch_coverage=1 00:06:31.567 --rc genhtml_function_coverage=1 00:06:31.567 --rc genhtml_legend=1 00:06:31.567 --rc geninfo_all_blocks=1 00:06:31.567 --rc geninfo_unexecuted_blocks=1 00:06:31.567 00:06:31.567 ' 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.567 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.568 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.568 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.568 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.568 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.568 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.568 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.568 17:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated five more times by earlier sourcings, elided ...]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same duplicated toolchain prefixes, elided ...]:/var/lib/snapd/snap/bin 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same duplicated toolchain prefixes, elided ...]:/var/lib/snapd/snap/bin 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same duplicated toolchain prefixes, elided ...]:/var/lib/snapd/snap/bin 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
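Each PATH assignment above carries the same /opt/golangci, /opt/protoc and /opt/go prefixes six times over because paths/export.sh prepends unconditionally on every source. A hedged sketch of an idempotent alternative (helper name hypothetical; not what export.sh actually does today):

# Prepend a directory only when it is not already on PATH.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;                       # already present: leave PATH alone
        *) PATH="$1${PATH:+:$PATH}" ;;
    esac
}
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin            # second call is a no-op
export PATH

The very next record trips a classic test-builtin pitfall: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' while the variable behind it is empty, so [ rejects the empty string as an integer operand and prints the "integer expression expected" complaint seen below. A small sketch of the failure and one common guard (variable name hypothetical, standing in for whatever flag is unset in this run):

unset SPDK_TEST_FOO                              # hypothetical flag, empty here
[ "$SPDK_TEST_FOO" -eq 1 ] && echo enabled       # -> [: : integer expression expected
[ "${SPDK_TEST_FOO:-0}" -eq 1 ] && echo enabled  # defaulting to 0 keeps the test quiet

00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33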
-- # '[' '' -eq 1 ']' 00:06:31.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:31.568 17:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:36.987 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:36.987 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:36.987 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:36.987 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:36.987 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:36.987 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:36.987 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:36.987 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:36.987 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:36.988 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:36.988 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:36.988 Found net devices under 0000:31:00.0: cvl_0_0 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:36.988 17:43:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:36.988 Found net devices under 0000:31:00.1: cvl_0_1 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:36.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:36.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:06:36.988 00:06:36.988 --- 10.0.0.2 ping statistics --- 00:06:36.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:36.988 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:36.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:36.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:06:36.988 00:06:36.988 --- 10.0.0.1 ping statistics --- 00:06:36.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:36.988 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:36.988 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:36.989 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:36.989 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:36.989 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:36.989 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:36.989 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:36.989 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:36.989 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:36.989 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2813139 00:06:36.989 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2813139 00:06:36.989 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2813139 ']' 00:06:36.989 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:36.989 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.989 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.989 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.989 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.989 17:43:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:36.989 [2024-12-06 17:43:24.694660] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:06:36.989 [2024-12-06 17:43:24.694726] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.989 [2024-12-06 17:43:24.773421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:36.989 [2024-12-06 17:43:24.811147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:36.989 [2024-12-06 17:43:24.811183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:36.989 [2024-12-06 17:43:24.811191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:36.989 [2024-12-06 17:43:24.811196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:36.989 [2024-12-06 17:43:24.811200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
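To recap the plumbing nvmf_tcp_init traced above: the first E810 port (cvl_0_0) is moved into a fresh namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens port 4420, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. A condensed sketch of that sequence (device, namespace and address names taken from this log; run as root; address flushes omitted):

# Condensed from the nvmf/common.sh trace above -- not the full helper.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator

waitforlisten then blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock. A hedged sketch of that polling loop (the real implementation lives in common/autotest_common.sh; rpc_get_methods is a standard SPDK RPC, the rpc.py path assumes the spdk checkout as the working directory):

# Poll the RPC socket until the app is up, bounded by a retry budget.
pid=2813139
rpc_addr=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || { echo "app died"; exit 1; }
    if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        break                      # target is listening
    fi
    sleep 0.5                      # pacing is an assumption of this sketch
done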
00:06:36.989 [2024-12-06 17:43:24.812689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.989 [2024-12-06 17:43:24.812847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.989 [2024-12-06 17:43:24.813002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.989 [2024-12-06 17:43:24.813004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:37.924 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.924 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:37.924 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:37.924 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.925 [2024-12-06 17:43:25.513805] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.925 Malloc0 00:06:37.925 [2024-12-06 17:43:25.579843] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2813486 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2813486 /var/tmp/bdevperf.sock 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2813486 ']' 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:37.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:37.925 { 00:06:37.925 "params": { 00:06:37.925 "name": "Nvme$subsystem", 00:06:37.925 "trtype": "$TEST_TRANSPORT", 00:06:37.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:37.925 "adrfam": "ipv4", 00:06:37.925 "trsvcid": "$NVMF_PORT", 00:06:37.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:37.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:37.925 "hdgst": ${hdgst:-false}, 00:06:37.925 "ddgst": ${ddgst:-false} 00:06:37.925 }, 00:06:37.925 "method": "bdev_nvme_attach_controller" 00:06:37.925 } 00:06:37.925 EOF 00:06:37.925 )") 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:37.925 17:43:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:37.925 "params": { 00:06:37.925 "name": "Nvme0", 00:06:37.925 "trtype": "tcp", 00:06:37.925 "traddr": "10.0.0.2", 00:06:37.925 "adrfam": "ipv4", 00:06:37.925 "trsvcid": "4420", 00:06:37.925 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:37.925 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:37.925 "hdgst": false, 00:06:37.925 "ddgst": false 00:06:37.925 }, 00:06:37.925 "method": "bdev_nvme_attach_controller" 00:06:37.925 }' 00:06:37.925 [2024-12-06 17:43:25.651287] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
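Note how host_management.sh@72 above hands the generated controller config to bdevperf as --json /dev/fd/63: gen_nvmf_target_json prints the JSON on stdout and process substitution exposes it as a readable file, so no temp file ever touches disk. A stripped-down sketch of the same pattern (consumer swapped for cat to keep it self-contained):

# gen_config stands in for gen_nvmf_target_json: emit config on stdout.
gen_config() {
  cat <<EOF
{ "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "trsvcid": "4420" } }
EOF
}
# <(...) materializes as /dev/fd/NN; the consumer reads it like a file.
cat <(gen_config)     # bdevperf does the same via --json <(gen_nvmf_target_json ...)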
00:06:37.925 [2024-12-06 17:43:25.651340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2813486 ] 00:06:37.925 [2024-12-06 17:43:25.729646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.184 [2024-12-06 17:43:25.765896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.184 Running I/O for 10 seconds... 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:38.755 17:43:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.755 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.756 [2024-12-06 17:43:26.506780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2037d40 is same with the state(6) to be set 00:06:38.756 [... the same recv-state error for tqpair=0x2037d40 repeats some forty more times between 17:43:26.506816 and 17:43:26.506993; repeats elided ...] 00:06:38.756 [2024-12-06 17:43:26.507361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.756 [2024-12-06 17:43:26.507402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.756 [... the same WRITE print plus ABORTED - SQ DELETION completion repeats for every in-flight command, cid:1 through cid:62 (lba:106624 through lba:114432); repeats elided ...] 00:06:38.758 [2024-12-06 17:43:26.508456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:38.758 [2024-12-06 17:43:26.508463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:38.758 [2024-12-06 17:43:26.508488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:06:38.758 [2024-12-06 17:43:26.509698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:38.758 task offset: 106496 on job bdev=Nvme0n1 fails 00:06:38.758 00:06:38.758 Latency(us) 00:06:38.758 [2024-12-06T16:43:26.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:38.758 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:38.758 Job: Nvme0n1 ended in about 0.55 seconds with error 00:06:38.758 Verification LBA range: start 0x0 length 0x400 00:06:38.758 Nvme0n1 : 0.55 1504.62 94.04 115.74 0.00 38512.41 2102.61 32549.55 00:06:38.758 [2024-12-06T16:43:26.585Z] =================================================================================================================== 00:06:38.758 [2024-12-06T16:43:26.585Z] Total : 1504.62 94.04 115.74 0.00 38512.41 2102.61 32549.55 00:06:38.758 [2024-12-06 17:43:26.511703] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.758 [2024-12-06 17:43:26.511725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203fb10 (9): Bad file descriptor 00:06:38.758 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.758 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:38.758 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.758 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.758 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.758 17:43:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 [2024-12-06 17:43:26.526784] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting
controller successful. 00:06:39.696 17:43:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2813486 00:06:39.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2813486) - No such process 00:06:39.696 17:43:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:39.696 17:43:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:39.955 17:43:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:39.955 17:43:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:39.955 17:43:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:39.955 17:43:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:39.955 17:43:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:39.955 17:43:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:39.955 { 00:06:39.955 "params": { 00:06:39.955 "name": "Nvme$subsystem", 00:06:39.955 "trtype": "$TEST_TRANSPORT", 00:06:39.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:39.955 "adrfam": "ipv4", 00:06:39.955 "trsvcid": "$NVMF_PORT", 00:06:39.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:39.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:39.955 "hdgst": ${hdgst:-false}, 00:06:39.955 "ddgst": ${ddgst:-false} 00:06:39.955 }, 00:06:39.955 "method": "bdev_nvme_attach_controller" 00:06:39.955 } 00:06:39.955 EOF 00:06:39.955 )") 00:06:39.955 17:43:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:39.955 17:43:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:39.955 17:43:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:39.955 17:43:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:39.955 "params": { 00:06:39.955 "name": "Nvme0", 00:06:39.955 "trtype": "tcp", 00:06:39.955 "traddr": "10.0.0.2", 00:06:39.955 "adrfam": "ipv4", 00:06:39.955 "trsvcid": "4420", 00:06:39.955 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:39.955 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:39.955 "hdgst": false, 00:06:39.955 "ddgst": false 00:06:39.955 }, 00:06:39.955 "method": "bdev_nvme_attach_controller" 00:06:39.955 }' 00:06:39.955 [2024-12-06 17:43:27.556794] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:06:39.955 [2024-12-06 17:43:27.556851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2813864 ] 00:06:39.955 [2024-12-06 17:43:27.636068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.955 [2024-12-06 17:43:27.671244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.212 Running I/O for 1 seconds... 
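Note on the config generation traced above: the heredoc only emits the bdev_nvme_attach_controller fragment, and the jq step assembles it into the JSON document bdevperf reads from /dev/fd/62. A minimal standalone replay of the same run is sketched below; it assumes the standard SPDK "subsystems"/"bdev" JSON-config wrapper around the printed fragment, and that the target from this run is still listening on 10.0.0.2:4420.

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # same flags as the traced run: queue depth 64, 64 KiB IO, verify workload, 1 second
    ./build/examples/bdevperf --json /dev/stdin -q 64 -o 65536 -w verify -t 1 <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF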
00:06:41.587 1726.00 IOPS, 107.88 MiB/s 00:06:41.587 Latency(us) 00:06:41.587 [2024-12-06T16:43:29.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:41.587 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:41.587 Verification LBA range: start 0x0 length 0x400 00:06:41.587 Nvme0n1 : 1.02 1764.20 110.26 0.00 0.00 35628.96 4396.37 31457.28 00:06:41.587 [2024-12-06T16:43:29.414Z] =================================================================================================================== 00:06:41.587 [2024-12-06T16:43:29.415Z] Total : 1764.20 110.26 0.00 0.00 35628.96 4396.37 31457.28 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:41.588 rmmod nvme_tcp 00:06:41.588 rmmod nvme_fabrics 00:06:41.588 rmmod nvme_keyring 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2813139 ']' 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2813139 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2813139 ']' 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2813139 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2813139 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:41.588 17:43:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2813139' 00:06:41.588 killing process with pid 2813139 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2813139 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2813139 00:06:41.588 [2024-12-06 17:43:29.316837] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:41.588 17:43:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.124 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:44.124 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:44.124 00:06:44.124 real 0m12.535s 00:06:44.124 user 0m22.041s 00:06:44.124 sys 0m5.163s 00:06:44.124 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.124 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:44.124 ************************************ 00:06:44.124 END TEST nvmf_host_management 00:06:44.124 ************************************ 00:06:44.124 17:43:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:44.124 17:43:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.124 17:43:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.124 17:43:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:44.124 ************************************ 00:06:44.124 START TEST nvmf_lvol 00:06:44.124 ************************************ 00:06:44.124 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:44.124 * Looking for test storage... 00:06:44.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.124 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:44.124 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:44.124 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:44.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.125 --rc genhtml_branch_coverage=1 00:06:44.125 --rc genhtml_function_coverage=1 00:06:44.125 --rc genhtml_legend=1 00:06:44.125 --rc geninfo_all_blocks=1 00:06:44.125 --rc geninfo_unexecuted_blocks=1 00:06:44.125 00:06:44.125 ' 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:44.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.125 --rc genhtml_branch_coverage=1 00:06:44.125 --rc genhtml_function_coverage=1 00:06:44.125 --rc genhtml_legend=1 00:06:44.125 --rc geninfo_all_blocks=1 00:06:44.125 --rc geninfo_unexecuted_blocks=1 00:06:44.125 00:06:44.125 ' 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:44.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.125 --rc genhtml_branch_coverage=1 00:06:44.125 --rc genhtml_function_coverage=1 00:06:44.125 --rc genhtml_legend=1 00:06:44.125 --rc geninfo_all_blocks=1 00:06:44.125 --rc geninfo_unexecuted_blocks=1 00:06:44.125 00:06:44.125 ' 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:44.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.125 --rc genhtml_branch_coverage=1 00:06:44.125 --rc genhtml_function_coverage=1 00:06:44.125 --rc genhtml_legend=1 00:06:44.125 --rc geninfo_all_blocks=1 00:06:44.125 --rc geninfo_unexecuted_blocks=1 00:06:44.125 00:06:44.125 ' 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:44.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:44.125 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:44.126 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:44.126 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.126 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:44.126 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.126 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:44.126 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:44.126 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:44.126 17:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:49.414 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:49.414 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:49.414 17:43:37 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.414 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:49.415 Found net devices under 0000:31:00.0: cvl_0_0 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:49.415 Found net devices under 0000:31:00.1: cvl_0_1 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:49.415 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:49.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:49.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:06:49.675 00:06:49.675 --- 10.0.0.2 ping statistics --- 00:06:49.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.675 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:49.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:49.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:06:49.675 00:06:49.675 --- 10.0.0.1 ping statistics --- 00:06:49.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.675 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2818567 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2818567 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2818567 ']' 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.675 17:43:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:49.675 [2024-12-06 17:43:37.347595] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
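Condensed from the nvmf_tcp_init trace above, the namespace wiring the harness just performed (interface names cvl_0_0/cvl_0_1 as detected on this e810 host); this target/initiator split is what the rest of the test relies on:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                               # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # and back

The nvmf_tgt launched next runs inside cvl_0_0_ns_spdk (via NVMF_TARGET_NS_CMD), which is why it listens on 10.0.0.2 while the initiator-side tools stay in the root namespace on 10.0.0.1.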
00:06:49.675 [2024-12-06 17:43:37.347658] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.675 [2024-12-06 17:43:37.442501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.676 [2024-12-06 17:43:37.495591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:49.676 [2024-12-06 17:43:37.495644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:49.676 [2024-12-06 17:43:37.495653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:49.676 [2024-12-06 17:43:37.495660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:49.676 [2024-12-06 17:43:37.495666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:49.676 [2024-12-06 17:43:37.497621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.676 [2024-12-06 17:43:37.497789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.676 [2024-12-06 17:43:37.497789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.611 17:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.611 17:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:50.611 17:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:50.611 17:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:50.611 17:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:50.611 17:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.611 17:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:50.611 [2024-12-06 17:43:38.308407] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.611 17:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:50.880 17:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:50.880 17:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:50.880 17:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:50.880 17:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:51.139 17:43:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:51.398 17:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6977b69e-8844-48b0-a125-d4ae37ca945f 00:06:51.398 17:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6977b69e-8844-48b0-a125-d4ae37ca945f lvol 20 00:06:51.398 17:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=533a6ab5-b522-4ee6-b39e-f46a3dbac171 00:06:51.398 17:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:51.657 17:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 533a6ab5-b522-4ee6-b39e-f46a3dbac171 00:06:51.657 17:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:51.915 [2024-12-06 17:43:39.618430] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.915 17:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:52.174 17:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2819262 00:06:52.174 17:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:52.174 17:43:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:53.110 17:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 533a6ab5-b522-4ee6-b39e-f46a3dbac171 MY_SNAPSHOT 00:06:53.368 17:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d48ffff9-6854-4d4b-83ae-9f329feed091 00:06:53.368 17:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 533a6ab5-b522-4ee6-b39e-f46a3dbac171 30 00:06:53.368 17:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d48ffff9-6854-4d4b-83ae-9f329feed091 MY_CLONE 00:06:53.626 17:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a4964e16-6d68-4ef3-b14f-65bb2e1bbcfe 00:06:53.626 17:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a4964e16-6d68-4ef3-b14f-65bb2e1bbcfe 00:06:53.884 17:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2819262 00:07:03.880 Initializing NVMe Controllers 00:07:03.880 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:03.880 Controller IO queue size 128, less than required. 00:07:03.880 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
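Stripped of the xtrace noise, the provisioning and lvol operations this test performed, condensed from the RPC calls traced above; <lvs>, <lvol>, <snap> and <clone> stand in for the UUIDs captured in the log (rpc.py is the workspace's spdk/scripts/rpc.py):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                  # Malloc0
    rpc.py bdev_malloc_create 64 512                  # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs         # -> <lvs>
    rpc.py bdev_lvol_create -u <lvs> lvol 20          # -> <lvol>, 20 MiB initial size
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf drives randwrite IO against the namespace:
    rpc.py bdev_lvol_snapshot <lvol> MY_SNAPSHOT      # -> <snap>
    rpc.py bdev_lvol_resize <lvol> 30                 # grow the live lvol to 30 MiB
    rpc.py bdev_lvol_clone <snap> MY_CLONE            # -> <clone>
    rpc.py bdev_lvol_inflate <clone>

The point of the ordering is that every lvol operation (snapshot, resize, clone, inflate) happens while the perf job is writing, exercising the lvol layer under concurrent IO.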
00:07:03.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:03.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:03.880 Initialization complete. Launching workers. 00:07:03.880 ======================================================== 00:07:03.880 Latency(us) 00:07:03.880 Device Information : IOPS MiB/s Average min max 00:07:03.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16493.10 64.43 7762.64 1695.77 44820.59 00:07:03.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17233.60 67.32 7428.65 918.73 52915.55 00:07:03.880 ======================================================== 00:07:03.880 Total : 33726.70 131.74 7591.98 918.73 52915.55 00:07:03.880 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 533a6ab5-b522-4ee6-b39e-f46a3dbac171 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6977b69e-8844-48b0-a125-d4ae37ca945f 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:03.880 rmmod nvme_tcp 00:07:03.880 rmmod nvme_fabrics 00:07:03.880 rmmod nvme_keyring 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2818567 ']' 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2818567 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2818567 ']' 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2818567 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2818567 00:07:03.880 17:43:50 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2818567' 00:07:03.880 killing process with pid 2818567 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2818567 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2818567 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:03.880 17:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.257 17:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:05.257 00:07:05.257 real 0m21.473s 00:07:05.257 user 1m1.945s 00:07:05.257 sys 0m7.244s 00:07:05.257 17:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.257 17:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:05.257 ************************************ 00:07:05.257 END TEST nvmf_lvol 00:07:05.257 ************************************ 00:07:05.257 17:43:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:05.257 17:43:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:05.257 17:43:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.257 17:43:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:05.257 ************************************ 00:07:05.257 START TEST nvmf_lvs_grow 00:07:05.257 ************************************ 00:07:05.257 17:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:05.257 * Looking for test storage... 
00:07:05.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.257 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:05.257 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:05.257 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:05.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.258 --rc genhtml_branch_coverage=1 00:07:05.258 --rc genhtml_function_coverage=1 00:07:05.258 --rc genhtml_legend=1 00:07:05.258 --rc geninfo_all_blocks=1 00:07:05.258 --rc geninfo_unexecuted_blocks=1 00:07:05.258 00:07:05.258 ' 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:05.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.258 --rc genhtml_branch_coverage=1 00:07:05.258 --rc genhtml_function_coverage=1 00:07:05.258 --rc genhtml_legend=1 00:07:05.258 --rc geninfo_all_blocks=1 00:07:05.258 --rc geninfo_unexecuted_blocks=1 00:07:05.258 00:07:05.258 ' 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:05.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.258 --rc genhtml_branch_coverage=1 00:07:05.258 --rc genhtml_function_coverage=1 00:07:05.258 --rc genhtml_legend=1 00:07:05.258 --rc geninfo_all_blocks=1 00:07:05.258 --rc geninfo_unexecuted_blocks=1 00:07:05.258 00:07:05.258 ' 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:05.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.258 --rc genhtml_branch_coverage=1 00:07:05.258 --rc genhtml_function_coverage=1 00:07:05.258 --rc genhtml_legend=1 00:07:05.258 --rc geninfo_all_blocks=1 00:07:05.258 --rc geninfo_unexecuted_blocks=1 00:07:05.258 00:07:05.258 ' 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.258 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:05.517 17:43:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:05.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:05.517 17:43:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:10.789 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:10.789 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:10.789 17:43:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:10.789 Found net devices under 0000:31:00.0: cvl_0_0 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:10.789 Found net devices under 0000:31:00.1: cvl_0_1 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:10.789 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:10.790 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:11.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:11.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:07:11.049 00:07:11.049 --- 10.0.0.2 ping statistics --- 00:07:11.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.049 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:11.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:11.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:07:11.049 00:07:11.049 --- 10.0.0.1 ping statistics --- 00:07:11.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.049 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2825964 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2825964 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2825964 ']' 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.049 17:43:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:11.049 [2024-12-06 17:43:58.723945] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
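Before any of the NVMe/TCP tests can run, nvmf/common.sh builds the two-address topology the ping output above verifies: the target NIC moves into a private network namespace so initiator and target can share one physical host. Distilled, and assuming the two ice ports have already been renamed cvl_0_0/cvl_0_1 as in this run (the iptables comment tagging is dropped for brevity), the recipe is:

  ip netns add cvl_0_0_ns_spdk                                  # target gets a private namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP on the initiator NIC
  ping -c 1 10.0.0.2                                            # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target ns -> root ns
  # nvmf_tgt itself is then launched inside the namespace, as the trace shows:
  # ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1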
00:07:11.049 [2024-12-06 17:43:58.723999] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.049 [2024-12-06 17:43:58.798107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.049 [2024-12-06 17:43:58.831935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.049 [2024-12-06 17:43:58.831966] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.049 [2024-12-06 17:43:58.831971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.050 [2024-12-06 17:43:58.831976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.050 [2024-12-06 17:43:58.831984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:11.050 [2024-12-06 17:43:58.832487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:11.988 [2024-12-06 17:43:59.667739] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:11.988 ************************************ 00:07:11.988 START TEST lvs_grow_clean 00:07:11.988 ************************************ 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:11.988 17:43:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:11.988 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:12.248 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:12.248 17:43:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:12.248 17:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=aa535d7c-42b5-4675-bb4c-fb3bae9d792e 00:07:12.248 17:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa535d7c-42b5-4675-bb4c-fb3bae9d792e 00:07:12.248 17:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:12.508 17:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:12.508 17:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:12.508 17:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aa535d7c-42b5-4675-bb4c-fb3bae9d792e lvol 150 00:07:12.768 17:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=61a11d1f-17b9-483c-a2ea-8306ea9aaa99 00:07:12.768 17:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:12.768 17:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:12.768 [2024-12-06 17:44:00.516550] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:12.768 [2024-12-06 17:44:00.516589] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:12.768 true 00:07:12.768 17:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
aa535d7c-42b5-4675-bb4c-fb3bae9d792e 00:07:12.768 17:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:13.026 17:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:13.026 17:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:13.026 17:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 61a11d1f-17b9-483c-a2ea-8306ea9aaa99 00:07:13.285 17:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:13.544 [2024-12-06 17:44:01.142401] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.544 17:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:13.544 17:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2826624 00:07:13.544 17:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:13.545 17:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:13.545 17:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2826624 /var/tmp/bdevperf.sock 00:07:13.545 17:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2826624 ']' 00:07:13.545 17:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:13.545 17:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.545 17:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:13.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:13.545 17:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.545 17:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:13.545 [2024-12-06 17:44:01.346843] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
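The lvs_grow_clean setup above is a cluster-accounting check. With a 4 MiB cluster size, the 200 MiB AIO file yields 50 clusters, of which the store keeps one for metadata at this geometry, hence the data_clusters=49 the test asserts; doubling the file to 400 MiB and growing the store should land on 100 − 1 = 99. Reusing the $RPC shorthand from the earlier sketch, and shortening the backing-file path to aio_file, the sequence is roughly:

  truncate -s 200M aio_file
  $RPC bdev_aio_create aio_file aio_bdev 4096                   # AIO bdev with 4 KiB blocks
  lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # -> 49
  lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)              # 150 MiB lvol, thick-provisioned
  truncate -s 400M aio_file                                     # grow the backing file...
  $RPC bdev_aio_rescan aio_bdev                                 # ...and let the bdev notice (51200 -> 102400 blocks)
  $RPC bdev_lvol_grow_lvstore -u "$lvs"                         # issued mid-bdevperf-run in the trace
  $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # -> 99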
00:07:13.545 [2024-12-06 17:44:01.346897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2826624 ] 00:07:13.804 [2024-12-06 17:44:01.424331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.804 [2024-12-06 17:44:01.460355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.387 17:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.387 17:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:14.388 17:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:14.647 Nvme0n1 00:07:14.647 17:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:14.907 [ 00:07:14.907 { 00:07:14.907 "name": "Nvme0n1", 00:07:14.907 "aliases": [ 00:07:14.907 "61a11d1f-17b9-483c-a2ea-8306ea9aaa99" 00:07:14.907 ], 00:07:14.907 "product_name": "NVMe disk", 00:07:14.907 "block_size": 4096, 00:07:14.907 "num_blocks": 38912, 00:07:14.907 "uuid": "61a11d1f-17b9-483c-a2ea-8306ea9aaa99", 00:07:14.907 "numa_id": 0, 00:07:14.907 "assigned_rate_limits": { 00:07:14.907 "rw_ios_per_sec": 0, 00:07:14.907 "rw_mbytes_per_sec": 0, 00:07:14.907 "r_mbytes_per_sec": 0, 00:07:14.907 "w_mbytes_per_sec": 0 00:07:14.907 }, 00:07:14.907 "claimed": false, 00:07:14.907 "zoned": false, 00:07:14.907 "supported_io_types": { 00:07:14.907 "read": true, 00:07:14.907 "write": true, 00:07:14.907 "unmap": true, 00:07:14.907 "flush": true, 00:07:14.907 "reset": true, 00:07:14.907 "nvme_admin": true, 00:07:14.907 "nvme_io": true, 00:07:14.907 "nvme_io_md": false, 00:07:14.907 "write_zeroes": true, 00:07:14.907 "zcopy": false, 00:07:14.907 "get_zone_info": false, 00:07:14.907 "zone_management": false, 00:07:14.907 "zone_append": false, 00:07:14.907 "compare": true, 00:07:14.907 "compare_and_write": true, 00:07:14.907 "abort": true, 00:07:14.907 "seek_hole": false, 00:07:14.907 "seek_data": false, 00:07:14.907 "copy": true, 00:07:14.907 "nvme_iov_md": false 00:07:14.907 }, 00:07:14.907 "memory_domains": [ 00:07:14.907 { 00:07:14.907 "dma_device_id": "system", 00:07:14.907 "dma_device_type": 1 00:07:14.907 } 00:07:14.907 ], 00:07:14.907 "driver_specific": { 00:07:14.907 "nvme": [ 00:07:14.907 { 00:07:14.907 "trid": { 00:07:14.908 "trtype": "TCP", 00:07:14.908 "adrfam": "IPv4", 00:07:14.908 "traddr": "10.0.0.2", 00:07:14.908 "trsvcid": "4420", 00:07:14.908 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:14.908 }, 00:07:14.908 "ctrlr_data": { 00:07:14.908 "cntlid": 1, 00:07:14.908 "vendor_id": "0x8086", 00:07:14.908 "model_number": "SPDK bdev Controller", 00:07:14.908 "serial_number": "SPDK0", 00:07:14.908 "firmware_revision": "25.01", 00:07:14.908 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:14.908 "oacs": { 00:07:14.908 "security": 0, 00:07:14.908 "format": 0, 00:07:14.908 "firmware": 0, 00:07:14.908 "ns_manage": 0 00:07:14.908 }, 00:07:14.908 "multi_ctrlr": true, 00:07:14.908 
"ana_reporting": false 00:07:14.908 }, 00:07:14.908 "vs": { 00:07:14.908 "nvme_version": "1.3" 00:07:14.908 }, 00:07:14.908 "ns_data": { 00:07:14.908 "id": 1, 00:07:14.908 "can_share": true 00:07:14.908 } 00:07:14.908 } 00:07:14.908 ], 00:07:14.908 "mp_policy": "active_passive" 00:07:14.908 } 00:07:14.908 } 00:07:14.908 ] 00:07:14.908 17:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2826800 00:07:14.908 17:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:14.908 17:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:14.908 Running I/O for 10 seconds... 00:07:15.848 Latency(us) 00:07:15.848 [2024-12-06T16:44:03.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.848 Nvme0n1 : 1.00 25282.00 98.76 0.00 0.00 0.00 0.00 0.00 00:07:15.848 [2024-12-06T16:44:03.675Z] =================================================================================================================== 00:07:15.848 [2024-12-06T16:44:03.675Z] Total : 25282.00 98.76 0.00 0.00 0.00 0.00 0.00 00:07:15.848 00:07:16.787 17:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u aa535d7c-42b5-4675-bb4c-fb3bae9d792e 00:07:17.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.047 Nvme0n1 : 2.00 25450.00 99.41 0.00 0.00 0.00 0.00 0.00 00:07:17.047 [2024-12-06T16:44:04.874Z] =================================================================================================================== 00:07:17.047 [2024-12-06T16:44:04.874Z] Total : 25450.00 99.41 0.00 0.00 0.00 0.00 0.00 00:07:17.047 00:07:17.047 true 00:07:17.047 17:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa535d7c-42b5-4675-bb4c-fb3bae9d792e 00:07:17.047 17:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:17.307 17:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:17.307 17:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:17.307 17:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2826800 00:07:17.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.875 Nvme0n1 : 3.00 25499.67 99.61 0.00 0.00 0.00 0.00 0.00 00:07:17.875 [2024-12-06T16:44:05.702Z] =================================================================================================================== 00:07:17.875 [2024-12-06T16:44:05.702Z] Total : 25499.67 99.61 0.00 0.00 0.00 0.00 0.00 00:07:17.875 00:07:18.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.812 Nvme0n1 : 4.00 25555.50 99.83 0.00 0.00 0.00 0.00 0.00 00:07:18.812 [2024-12-06T16:44:06.639Z] 
=================================================================================================================== 00:07:18.812 [2024-12-06T16:44:06.639Z] Total : 25555.50 99.83 0.00 0.00 0.00 0.00 0.00 00:07:18.812 00:07:20.188 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.188 Nvme0n1 : 5.00 25590.00 99.96 0.00 0.00 0.00 0.00 0.00 00:07:20.188 [2024-12-06T16:44:08.015Z] =================================================================================================================== 00:07:20.188 [2024-12-06T16:44:08.015Z] Total : 25590.00 99.96 0.00 0.00 0.00 0.00 0.00 00:07:20.188 00:07:21.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.123 Nvme0n1 : 6.00 25613.00 100.05 0.00 0.00 0.00 0.00 0.00 00:07:21.123 [2024-12-06T16:44:08.950Z] =================================================================================================================== 00:07:21.123 [2024-12-06T16:44:08.950Z] Total : 25613.00 100.05 0.00 0.00 0.00 0.00 0.00 00:07:21.123 00:07:22.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.060 Nvme0n1 : 7.00 25629.14 100.11 0.00 0.00 0.00 0.00 0.00 00:07:22.060 [2024-12-06T16:44:09.887Z] =================================================================================================================== 00:07:22.060 [2024-12-06T16:44:09.887Z] Total : 25629.14 100.11 0.00 0.00 0.00 0.00 0.00 00:07:22.060 00:07:22.995 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.995 Nvme0n1 : 8.00 25640.38 100.16 0.00 0.00 0.00 0.00 0.00 00:07:22.995 [2024-12-06T16:44:10.822Z] =================================================================================================================== 00:07:22.995 [2024-12-06T16:44:10.822Z] Total : 25640.38 100.16 0.00 0.00 0.00 0.00 0.00 00:07:22.995 00:07:23.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.931 Nvme0n1 : 9.00 25657.22 100.22 0.00 0.00 0.00 0.00 0.00 00:07:23.931 [2024-12-06T16:44:11.758Z] =================================================================================================================== 00:07:23.931 [2024-12-06T16:44:11.758Z] Total : 25657.22 100.22 0.00 0.00 0.00 0.00 0.00 00:07:23.931 00:07:24.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.869 Nvme0n1 : 10.00 25670.70 100.28 0.00 0.00 0.00 0.00 0.00 00:07:24.869 [2024-12-06T16:44:12.696Z] =================================================================================================================== 00:07:24.869 [2024-12-06T16:44:12.696Z] Total : 25670.70 100.28 0.00 0.00 0.00 0.00 0.00 00:07:24.869 00:07:24.869 00:07:24.869 Latency(us) 00:07:24.869 [2024-12-06T16:44:12.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.869 Nvme0n1 : 10.01 25669.81 100.27 0.00 0.00 4982.98 2525.87 12615.68 00:07:24.869 [2024-12-06T16:44:12.696Z] =================================================================================================================== 00:07:24.869 [2024-12-06T16:44:12.696Z] Total : 25669.81 100.27 0.00 0.00 4982.98 2525.87 12615.68 00:07:24.869 { 00:07:24.869 "results": [ 00:07:24.869 { 00:07:24.869 "job": "Nvme0n1", 00:07:24.869 "core_mask": "0x2", 00:07:24.869 "workload": "randwrite", 00:07:24.869 "status": "finished", 00:07:24.869 "queue_depth": 128, 00:07:24.869 "io_size": 4096, 
00:07:24.869 "runtime": 10.005332, 00:07:24.869 "iops": 25669.812855785294, 00:07:24.869 "mibps": 100.2727064679113, 00:07:24.869 "io_failed": 0, 00:07:24.869 "io_timeout": 0, 00:07:24.869 "avg_latency_us": 4982.979987410854, 00:07:24.869 "min_latency_us": 2525.866666666667, 00:07:24.869 "max_latency_us": 12615.68 00:07:24.869 } 00:07:24.869 ], 00:07:24.869 "core_count": 1 00:07:24.869 } 00:07:24.869 17:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2826624 00:07:24.869 17:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2826624 ']' 00:07:24.869 17:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2826624 00:07:24.869 17:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:24.869 17:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.869 17:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2826624 00:07:25.128 17:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:25.128 17:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:25.128 17:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2826624' 00:07:25.128 killing process with pid 2826624 00:07:25.128 17:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2826624 00:07:25.128 Received shutdown signal, test time was about 10.000000 seconds 00:07:25.128 00:07:25.128 Latency(us) 00:07:25.128 [2024-12-06T16:44:12.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.128 [2024-12-06T16:44:12.955Z] =================================================================================================================== 00:07:25.128 [2024-12-06T16:44:12.955Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:25.128 17:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2826624 00:07:25.128 17:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:25.387 17:44:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:25.387 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa535d7c-42b5-4675-bb4c-fb3bae9d792e 00:07:25.387 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:25.645 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:25.645 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:25.645 17:44:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:25.645 [2024-12-06 17:44:13.464542] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:25.904 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa535d7c-42b5-4675-bb4c-fb3bae9d792e 00:07:25.904 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:25.904 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa535d7c-42b5-4675-bb4c-fb3bae9d792e 00:07:25.904 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.904 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.904 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.904 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.904 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.904 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.904 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.904 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:25.904 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa535d7c-42b5-4675-bb4c-fb3bae9d792e 00:07:25.904 request: 00:07:25.904 { 00:07:25.904 "uuid": "aa535d7c-42b5-4675-bb4c-fb3bae9d792e", 00:07:25.904 "method": "bdev_lvol_get_lvstores", 00:07:25.904 "req_id": 1 00:07:25.904 } 00:07:25.904 Got JSON-RPC error response 00:07:25.904 response: 00:07:25.904 { 00:07:25.904 "code": -19, 00:07:25.904 "message": "No such device" 00:07:25.904 } 00:07:25.904 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:25.904 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.904 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.904 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.904 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:26.163 aio_bdev 00:07:26.163 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 61a11d1f-17b9-483c-a2ea-8306ea9aaa99 00:07:26.163 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=61a11d1f-17b9-483c-a2ea-8306ea9aaa99 00:07:26.163 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:26.163 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:26.163 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:26.163 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:26.163 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:26.163 17:44:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 61a11d1f-17b9-483c-a2ea-8306ea9aaa99 -t 2000 00:07:26.422 [ 00:07:26.422 { 00:07:26.422 "name": "61a11d1f-17b9-483c-a2ea-8306ea9aaa99", 00:07:26.422 "aliases": [ 00:07:26.422 "lvs/lvol" 00:07:26.422 ], 00:07:26.422 "product_name": "Logical Volume", 00:07:26.422 "block_size": 4096, 00:07:26.422 "num_blocks": 38912, 00:07:26.422 "uuid": "61a11d1f-17b9-483c-a2ea-8306ea9aaa99", 00:07:26.422 "assigned_rate_limits": { 00:07:26.422 "rw_ios_per_sec": 0, 00:07:26.422 "rw_mbytes_per_sec": 0, 00:07:26.422 "r_mbytes_per_sec": 0, 00:07:26.422 "w_mbytes_per_sec": 0 00:07:26.422 }, 00:07:26.422 "claimed": false, 00:07:26.422 "zoned": false, 00:07:26.422 "supported_io_types": { 00:07:26.422 "read": true, 00:07:26.422 "write": true, 00:07:26.422 "unmap": true, 00:07:26.422 "flush": false, 00:07:26.422 "reset": true, 00:07:26.422 "nvme_admin": false, 00:07:26.422 "nvme_io": false, 00:07:26.422 "nvme_io_md": false, 00:07:26.422 "write_zeroes": true, 00:07:26.422 "zcopy": false, 00:07:26.422 "get_zone_info": false, 00:07:26.422 "zone_management": false, 00:07:26.422 "zone_append": false, 00:07:26.422 "compare": false, 00:07:26.422 "compare_and_write": false, 00:07:26.422 "abort": false, 00:07:26.422 "seek_hole": true, 00:07:26.422 "seek_data": true, 00:07:26.422 "copy": false, 00:07:26.422 "nvme_iov_md": false 00:07:26.422 }, 00:07:26.422 "driver_specific": { 00:07:26.422 "lvol": { 00:07:26.422 "lvol_store_uuid": "aa535d7c-42b5-4675-bb4c-fb3bae9d792e", 00:07:26.422 "base_bdev": "aio_bdev", 00:07:26.422 "thin_provision": false, 00:07:26.422 "num_allocated_clusters": 38, 00:07:26.422 "snapshot": false, 00:07:26.422 "clone": false, 00:07:26.422 "esnap_clone": false 00:07:26.422 } 00:07:26.422 } 00:07:26.422 } 00:07:26.422 ] 00:07:26.422 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:26.422 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa535d7c-42b5-4675-bb4c-fb3bae9d792e 00:07:26.422 
17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:26.680 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:26.680 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa535d7c-42b5-4675-bb4c-fb3bae9d792e 00:07:26.680 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:26.680 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:26.680 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 61a11d1f-17b9-483c-a2ea-8306ea9aaa99 00:07:26.964 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aa535d7c-42b5-4675-bb4c-fb3bae9d792e 00:07:27.294 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:27.294 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:27.294 00:07:27.294 real 0m15.233s 00:07:27.294 user 0m14.929s 00:07:27.294 sys 0m1.154s 00:07:27.294 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.294 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:27.294 ************************************ 00:07:27.294 END TEST lvs_grow_clean 00:07:27.294 ************************************ 00:07:27.294 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:27.294 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.294 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.294 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:27.294 ************************************ 00:07:27.294 START TEST lvs_grow_dirty 00:07:27.294 ************************************ 00:07:27.294 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:27.294 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:27.294 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:27.294 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:27.294 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:27.294 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:27.294 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:27.294 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:27.294 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:27.294 17:44:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:27.573 17:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:27.573 17:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:27.573 17:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=09aea4b7-764c-4285-9c60-4f6903807b01 00:07:27.573 17:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09aea4b7-764c-4285-9c60-4f6903807b01 00:07:27.574 17:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:27.832 17:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:27.832 17:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:27.832 17:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 09aea4b7-764c-4285-9c60-4f6903807b01 lvol 150 00:07:27.832 17:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b672204b-1ac7-482d-8bcf-5b063880f5ed 00:07:27.832 17:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:27.832 17:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:28.091 [2024-12-06 17:44:15.782588] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:28.091 [2024-12-06 17:44:15.782628] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:28.091 true 00:07:28.091 17:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09aea4b7-764c-4285-9c60-4f6903807b01 00:07:28.091 17:44:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:28.350 17:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:28.350 17:44:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:28.350 17:44:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b672204b-1ac7-482d-8bcf-5b063880f5ed 00:07:28.610 17:44:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:28.610 [2024-12-06 17:44:16.384348] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.610 17:44:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:28.870 17:44:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:28.870 17:44:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2830017 00:07:28.870 17:44:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:28.870 17:44:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2830017 /var/tmp/bdevperf.sock 00:07:28.870 17:44:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2830017 ']' 00:07:28.870 17:44:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:28.870 17:44:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.870 17:44:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:28.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:28.870 17:44:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.870 17:44:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:28.870 [2024-12-06 17:44:16.572029] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:07:28.870 [2024-12-06 17:44:16.572069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830017 ] 00:07:28.870 [2024-12-06 17:44:16.628055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.870 [2024-12-06 17:44:16.657973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.129 17:44:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.129 17:44:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:29.130 17:44:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:29.389 Nvme0n1 00:07:29.389 17:44:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:29.389 [ 00:07:29.389 { 00:07:29.389 "name": "Nvme0n1", 00:07:29.389 "aliases": [ 00:07:29.389 "b672204b-1ac7-482d-8bcf-5b063880f5ed" 00:07:29.389 ], 00:07:29.389 "product_name": "NVMe disk", 00:07:29.389 "block_size": 4096, 00:07:29.389 "num_blocks": 38912, 00:07:29.389 "uuid": "b672204b-1ac7-482d-8bcf-5b063880f5ed", 00:07:29.389 "numa_id": 0, 00:07:29.389 "assigned_rate_limits": { 00:07:29.389 "rw_ios_per_sec": 0, 00:07:29.389 "rw_mbytes_per_sec": 0, 00:07:29.389 "r_mbytes_per_sec": 0, 00:07:29.389 "w_mbytes_per_sec": 0 00:07:29.389 }, 00:07:29.389 "claimed": false, 00:07:29.389 "zoned": false, 00:07:29.389 "supported_io_types": { 00:07:29.389 "read": true, 00:07:29.389 "write": true, 00:07:29.389 "unmap": true, 00:07:29.389 "flush": true, 00:07:29.389 "reset": true, 00:07:29.389 "nvme_admin": true, 00:07:29.389 "nvme_io": true, 00:07:29.389 "nvme_io_md": false, 00:07:29.389 "write_zeroes": true, 00:07:29.389 "zcopy": false, 00:07:29.389 "get_zone_info": false, 00:07:29.389 "zone_management": false, 00:07:29.389 "zone_append": false, 00:07:29.389 "compare": true, 00:07:29.389 "compare_and_write": true, 00:07:29.389 "abort": true, 00:07:29.389 "seek_hole": false, 00:07:29.389 "seek_data": false, 00:07:29.389 "copy": true, 00:07:29.390 "nvme_iov_md": false 00:07:29.390 }, 00:07:29.390 "memory_domains": [ 00:07:29.390 { 00:07:29.390 "dma_device_id": "system", 00:07:29.390 "dma_device_type": 1 00:07:29.390 } 00:07:29.390 ], 00:07:29.390 "driver_specific": { 00:07:29.390 "nvme": [ 00:07:29.390 { 00:07:29.390 "trid": { 00:07:29.390 "trtype": "TCP", 00:07:29.390 "adrfam": "IPv4", 00:07:29.390 "traddr": "10.0.0.2", 00:07:29.390 "trsvcid": "4420", 00:07:29.390 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:29.390 }, 00:07:29.390 "ctrlr_data": { 00:07:29.390 "cntlid": 1, 00:07:29.390 "vendor_id": "0x8086", 00:07:29.390 "model_number": "SPDK bdev Controller", 00:07:29.390 "serial_number": "SPDK0", 00:07:29.390 "firmware_revision": "25.01", 00:07:29.390 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:29.390 "oacs": { 00:07:29.390 "security": 0, 00:07:29.390 "format": 0, 00:07:29.390 "firmware": 0, 00:07:29.390 "ns_manage": 0 00:07:29.390 }, 00:07:29.390 "multi_ctrlr": true, 00:07:29.390 
"ana_reporting": false 00:07:29.390 }, 00:07:29.390 "vs": { 00:07:29.390 "nvme_version": "1.3" 00:07:29.390 }, 00:07:29.390 "ns_data": { 00:07:29.390 "id": 1, 00:07:29.390 "can_share": true 00:07:29.390 } 00:07:29.390 } 00:07:29.390 ], 00:07:29.390 "mp_policy": "active_passive" 00:07:29.390 } 00:07:29.390 } 00:07:29.390 ] 00:07:29.390 17:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2830105 00:07:29.390 17:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:29.390 17:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:29.650 Running I/O for 10 seconds... 00:07:30.583 Latency(us) 00:07:30.583 [2024-12-06T16:44:18.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.583 Nvme0n1 : 1.00 25354.00 99.04 0.00 0.00 0.00 0.00 0.00 00:07:30.583 [2024-12-06T16:44:18.410Z] =================================================================================================================== 00:07:30.583 [2024-12-06T16:44:18.410Z] Total : 25354.00 99.04 0.00 0.00 0.00 0.00 0.00 00:07:30.583 00:07:31.517 17:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 09aea4b7-764c-4285-9c60-4f6903807b01 00:07:31.517 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.517 Nvme0n1 : 2.00 25476.50 99.52 0.00 0.00 0.00 0.00 0.00 00:07:31.517 [2024-12-06T16:44:19.344Z] =================================================================================================================== 00:07:31.517 [2024-12-06T16:44:19.344Z] Total : 25476.50 99.52 0.00 0.00 0.00 0.00 0.00 00:07:31.517 00:07:31.517 true 00:07:31.517 17:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09aea4b7-764c-4285-9c60-4f6903807b01 00:07:31.517 17:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:31.775 17:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:31.775 17:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:31.775 17:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2830105 00:07:32.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.710 Nvme0n1 : 3.00 25508.67 99.64 0.00 0.00 0.00 0.00 0.00 00:07:32.710 [2024-12-06T16:44:20.537Z] =================================================================================================================== 00:07:32.710 [2024-12-06T16:44:20.537Z] Total : 25508.67 99.64 0.00 0.00 0.00 0.00 0.00 00:07:32.710 00:07:33.658 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.658 Nvme0n1 : 4.00 25563.25 99.86 0.00 0.00 0.00 0.00 0.00 00:07:33.658 [2024-12-06T16:44:21.485Z] 
=================================================================================================================== 00:07:33.658 [2024-12-06T16:44:21.485Z] Total : 25563.25 99.86 0.00 0.00 0.00 0.00 0.00 00:07:33.658 00:07:34.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.595 Nvme0n1 : 5.00 25596.00 99.98 0.00 0.00 0.00 0.00 0.00 00:07:34.595 [2024-12-06T16:44:22.422Z] =================================================================================================================== 00:07:34.595 [2024-12-06T16:44:22.422Z] Total : 25596.00 99.98 0.00 0.00 0.00 0.00 0.00 00:07:34.595 00:07:35.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.531 Nvme0n1 : 6.00 25613.00 100.05 0.00 0.00 0.00 0.00 0.00 00:07:35.531 [2024-12-06T16:44:23.358Z] =================================================================================================================== 00:07:35.531 [2024-12-06T16:44:23.358Z] Total : 25613.00 100.05 0.00 0.00 0.00 0.00 0.00 00:07:35.531 00:07:36.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.468 Nvme0n1 : 7.00 25633.86 100.13 0.00 0.00 0.00 0.00 0.00 00:07:36.468 [2024-12-06T16:44:24.295Z] =================================================================================================================== 00:07:36.468 [2024-12-06T16:44:24.295Z] Total : 25633.86 100.13 0.00 0.00 0.00 0.00 0.00 00:07:36.468 00:07:37.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.849 Nvme0n1 : 8.00 25653.62 100.21 0.00 0.00 0.00 0.00 0.00 00:07:37.849 [2024-12-06T16:44:25.676Z] =================================================================================================================== 00:07:37.849 [2024-12-06T16:44:25.676Z] Total : 25653.62 100.21 0.00 0.00 0.00 0.00 0.00 00:07:37.849 00:07:38.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.785 Nvme0n1 : 9.00 25661.78 100.24 0.00 0.00 0.00 0.00 0.00 00:07:38.785 [2024-12-06T16:44:26.612Z] =================================================================================================================== 00:07:38.785 [2024-12-06T16:44:26.612Z] Total : 25661.78 100.24 0.00 0.00 0.00 0.00 0.00 00:07:38.785 00:07:39.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.724 Nvme0n1 : 10.00 25674.20 100.29 0.00 0.00 0.00 0.00 0.00 00:07:39.724 [2024-12-06T16:44:27.551Z] =================================================================================================================== 00:07:39.724 [2024-12-06T16:44:27.551Z] Total : 25674.20 100.29 0.00 0.00 0.00 0.00 0.00 00:07:39.724 00:07:39.724 00:07:39.724 Latency(us) 00:07:39.724 [2024-12-06T16:44:27.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.724 Nvme0n1 : 10.00 25672.13 100.28 0.00 0.00 4982.57 2048.00 8683.52 00:07:39.724 [2024-12-06T16:44:27.551Z] =================================================================================================================== 00:07:39.724 [2024-12-06T16:44:27.551Z] Total : 25672.13 100.28 0.00 0.00 4982.57 2048.00 8683.52 00:07:39.724 { 00:07:39.724 "results": [ 00:07:39.724 { 00:07:39.724 "job": "Nvme0n1", 00:07:39.724 "core_mask": "0x2", 00:07:39.724 "workload": "randwrite", 00:07:39.724 "status": "finished", 00:07:39.724 "queue_depth": 128, 00:07:39.724 "io_size": 4096, 
00:07:39.724 "runtime": 10.00326, 00:07:39.724 "iops": 25672.130885331382, 00:07:39.724 "mibps": 100.28176127082571, 00:07:39.724 "io_failed": 0, 00:07:39.724 "io_timeout": 0, 00:07:39.724 "avg_latency_us": 4982.565101665985, 00:07:39.724 "min_latency_us": 2048.0, 00:07:39.724 "max_latency_us": 8683.52 00:07:39.724 } 00:07:39.724 ], 00:07:39.724 "core_count": 1 00:07:39.724 } 00:07:39.724 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2830017 00:07:39.724 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2830017 ']' 00:07:39.724 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2830017 00:07:39.724 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:39.724 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.724 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2830017 00:07:39.724 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:39.724 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:39.724 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2830017' 00:07:39.724 killing process with pid 2830017 00:07:39.724 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2830017 00:07:39.724 Received shutdown signal, test time was about 10.000000 seconds 00:07:39.724 00:07:39.724 Latency(us) 00:07:39.724 [2024-12-06T16:44:27.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.724 [2024-12-06T16:44:27.551Z] =================================================================================================================== 00:07:39.724 [2024-12-06T16:44:27.551Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:39.724 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2830017 00:07:39.724 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:39.984 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:39.984 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09aea4b7-764c-4285-9c60-4f6903807b01 00:07:39.984 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:40.243 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:40.243 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:40.243 17:44:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2825964 00:07:40.243 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2825964 00:07:40.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2825964 Killed "${NVMF_APP[@]}" "$@" 00:07:40.243 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:40.243 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:40.243 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:40.243 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:40.243 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:40.243 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2832459 00:07:40.243 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2832459 00:07:40.243 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2832459 ']' 00:07:40.243 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.243 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.243 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.243 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.243 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:40.243 17:44:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:40.243 [2024-12-06 17:44:27.975769] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:07:40.243 [2024-12-06 17:44:27.975823] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.243 [2024-12-06 17:44:28.041913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.502 [2024-12-06 17:44:28.070711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.502 [2024-12-06 17:44:28.070737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.502 [2024-12-06 17:44:28.070743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.502 [2024-12-06 17:44:28.070747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:40.502 [2024-12-06 17:44:28.070751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:40.502 [2024-12-06 17:44:28.071248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.502 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.502 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:40.502 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:40.502 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:40.503 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:40.503 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.503 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:40.503 [2024-12-06 17:44:28.308488] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:40.503 [2024-12-06 17:44:28.308562] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:40.503 [2024-12-06 17:44:28.308585] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:40.503 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:40.503 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b672204b-1ac7-482d-8bcf-5b063880f5ed 00:07:40.503 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b672204b-1ac7-482d-8bcf-5b063880f5ed 00:07:40.503 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:40.503 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:40.503 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:40.503 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:40.503 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:40.761 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b672204b-1ac7-482d-8bcf-5b063880f5ed -t 2000 00:07:41.020 [ 00:07:41.020 { 00:07:41.020 "name": "b672204b-1ac7-482d-8bcf-5b063880f5ed", 00:07:41.020 "aliases": [ 00:07:41.020 "lvs/lvol" 00:07:41.020 ], 00:07:41.020 "product_name": "Logical Volume", 00:07:41.020 "block_size": 4096, 00:07:41.020 "num_blocks": 38912, 00:07:41.020 "uuid": "b672204b-1ac7-482d-8bcf-5b063880f5ed", 00:07:41.020 "assigned_rate_limits": { 00:07:41.020 "rw_ios_per_sec": 0, 00:07:41.020 "rw_mbytes_per_sec": 0, 
00:07:41.020 "r_mbytes_per_sec": 0, 00:07:41.020 "w_mbytes_per_sec": 0 00:07:41.020 }, 00:07:41.020 "claimed": false, 00:07:41.020 "zoned": false, 00:07:41.020 "supported_io_types": { 00:07:41.020 "read": true, 00:07:41.020 "write": true, 00:07:41.020 "unmap": true, 00:07:41.020 "flush": false, 00:07:41.020 "reset": true, 00:07:41.020 "nvme_admin": false, 00:07:41.020 "nvme_io": false, 00:07:41.020 "nvme_io_md": false, 00:07:41.020 "write_zeroes": true, 00:07:41.020 "zcopy": false, 00:07:41.020 "get_zone_info": false, 00:07:41.020 "zone_management": false, 00:07:41.020 "zone_append": false, 00:07:41.020 "compare": false, 00:07:41.020 "compare_and_write": false, 00:07:41.020 "abort": false, 00:07:41.020 "seek_hole": true, 00:07:41.020 "seek_data": true, 00:07:41.020 "copy": false, 00:07:41.020 "nvme_iov_md": false 00:07:41.020 }, 00:07:41.020 "driver_specific": { 00:07:41.020 "lvol": { 00:07:41.020 "lvol_store_uuid": "09aea4b7-764c-4285-9c60-4f6903807b01", 00:07:41.020 "base_bdev": "aio_bdev", 00:07:41.020 "thin_provision": false, 00:07:41.020 "num_allocated_clusters": 38, 00:07:41.020 "snapshot": false, 00:07:41.020 "clone": false, 00:07:41.020 "esnap_clone": false 00:07:41.020 } 00:07:41.020 } 00:07:41.020 } 00:07:41.020 ] 00:07:41.020 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:41.020 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09aea4b7-764c-4285-9c60-4f6903807b01 00:07:41.020 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:41.020 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:41.020 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09aea4b7-764c-4285-9c60-4f6903807b01 00:07:41.020 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:41.278 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:41.278 17:44:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:41.278 [2024-12-06 17:44:29.068843] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:41.278 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09aea4b7-764c-4285-9c60-4f6903807b01 00:07:41.278 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:41.278 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09aea4b7-764c-4285-9c60-4f6903807b01 00:07:41.278 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:41.278 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.278 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:41.278 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.278 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:41.536 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.536 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:41.536 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:41.536 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09aea4b7-764c-4285-9c60-4f6903807b01 00:07:41.536 request: 00:07:41.536 { 00:07:41.536 "uuid": "09aea4b7-764c-4285-9c60-4f6903807b01", 00:07:41.536 "method": "bdev_lvol_get_lvstores", 00:07:41.536 "req_id": 1 00:07:41.536 } 00:07:41.536 Got JSON-RPC error response 00:07:41.536 response: 00:07:41.536 { 00:07:41.536 "code": -19, 00:07:41.536 "message": "No such device" 00:07:41.536 } 00:07:41.536 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:41.536 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:41.536 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:41.536 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:41.536 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:41.794 aio_bdev 00:07:41.794 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b672204b-1ac7-482d-8bcf-5b063880f5ed 00:07:41.794 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b672204b-1ac7-482d-8bcf-5b063880f5ed 00:07:41.794 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:41.794 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:41.794 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:41.794 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:41.794 17:44:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:41.794 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b672204b-1ac7-482d-8bcf-5b063880f5ed -t 2000 00:07:42.053 [ 00:07:42.053 { 00:07:42.053 "name": "b672204b-1ac7-482d-8bcf-5b063880f5ed", 00:07:42.053 "aliases": [ 00:07:42.053 "lvs/lvol" 00:07:42.053 ], 00:07:42.053 "product_name": "Logical Volume", 00:07:42.053 "block_size": 4096, 00:07:42.053 "num_blocks": 38912, 00:07:42.053 "uuid": "b672204b-1ac7-482d-8bcf-5b063880f5ed", 00:07:42.053 "assigned_rate_limits": { 00:07:42.053 "rw_ios_per_sec": 0, 00:07:42.053 "rw_mbytes_per_sec": 0, 00:07:42.053 "r_mbytes_per_sec": 0, 00:07:42.053 "w_mbytes_per_sec": 0 00:07:42.053 }, 00:07:42.053 "claimed": false, 00:07:42.053 "zoned": false, 00:07:42.053 "supported_io_types": { 00:07:42.053 "read": true, 00:07:42.053 "write": true, 00:07:42.053 "unmap": true, 00:07:42.053 "flush": false, 00:07:42.053 "reset": true, 00:07:42.053 "nvme_admin": false, 00:07:42.053 "nvme_io": false, 00:07:42.053 "nvme_io_md": false, 00:07:42.053 "write_zeroes": true, 00:07:42.053 "zcopy": false, 00:07:42.053 "get_zone_info": false, 00:07:42.053 "zone_management": false, 00:07:42.053 "zone_append": false, 00:07:42.053 "compare": false, 00:07:42.053 "compare_and_write": false, 00:07:42.053 "abort": false, 00:07:42.053 "seek_hole": true, 00:07:42.053 "seek_data": true, 00:07:42.053 "copy": false, 00:07:42.053 "nvme_iov_md": false 00:07:42.053 }, 00:07:42.053 "driver_specific": { 00:07:42.053 "lvol": { 00:07:42.053 "lvol_store_uuid": "09aea4b7-764c-4285-9c60-4f6903807b01", 00:07:42.053 "base_bdev": "aio_bdev", 00:07:42.053 "thin_provision": false, 00:07:42.053 "num_allocated_clusters": 38, 00:07:42.053 "snapshot": false, 00:07:42.053 "clone": false, 00:07:42.053 "esnap_clone": false 00:07:42.053 } 00:07:42.053 } 00:07:42.053 } 00:07:42.053 ] 00:07:42.054 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:42.054 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09aea4b7-764c-4285-9c60-4f6903807b01 00:07:42.054 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:42.311 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:42.311 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 09aea4b7-764c-4285-9c60-4f6903807b01 00:07:42.311 17:44:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:42.311 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:42.312 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b672204b-1ac7-482d-8bcf-5b063880f5ed 00:07:42.570 17:44:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 09aea4b7-764c-4285-9c60-4f6903807b01 00:07:42.570 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:42.829 00:07:42.829 real 0m15.539s 00:07:42.829 user 0m42.576s 00:07:42.829 sys 0m2.567s 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:42.829 ************************************ 00:07:42.829 END TEST lvs_grow_dirty 00:07:42.829 ************************************ 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:42.829 nvmf_trace.0 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:42.829 rmmod nvme_tcp 00:07:42.829 rmmod nvme_fabrics 00:07:42.829 rmmod nvme_keyring 00:07:42.829 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:43.088 
17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2832459 ']' 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2832459 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2832459 ']' 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2832459 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2832459 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2832459' 00:07:43.088 killing process with pid 2832459 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2832459 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2832459 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.088 17:44:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.673 17:44:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:45.673 00:07:45.673 real 0m39.908s 00:07:45.673 user 1m2.115s 00:07:45.673 sys 0m8.305s 00:07:45.673 17:44:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.673 17:44:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:45.673 ************************************ 00:07:45.673 END TEST nvmf_lvs_grow 00:07:45.673 ************************************ 00:07:45.673 17:44:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:45.673 17:44:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.673 17:44:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.673 17:44:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.673 ************************************ 00:07:45.673 START TEST nvmf_bdev_io_wait 00:07:45.673 ************************************ 00:07:45.673 17:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:45.673 * Looking for test storage... 00:07:45.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.673 17:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:45.673 17:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:45.673 17:44:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:45.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.673 --rc genhtml_branch_coverage=1 00:07:45.673 --rc genhtml_function_coverage=1 00:07:45.673 --rc genhtml_legend=1 00:07:45.673 --rc geninfo_all_blocks=1 00:07:45.673 --rc geninfo_unexecuted_blocks=1 00:07:45.673 00:07:45.673 ' 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:45.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.673 --rc genhtml_branch_coverage=1 00:07:45.673 --rc genhtml_function_coverage=1 00:07:45.673 --rc genhtml_legend=1 00:07:45.673 --rc geninfo_all_blocks=1 00:07:45.673 --rc geninfo_unexecuted_blocks=1 00:07:45.673 00:07:45.673 ' 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:45.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.673 --rc genhtml_branch_coverage=1 00:07:45.673 --rc genhtml_function_coverage=1 00:07:45.673 --rc genhtml_legend=1 00:07:45.673 --rc geninfo_all_blocks=1 00:07:45.673 --rc geninfo_unexecuted_blocks=1 00:07:45.673 00:07:45.673 ' 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:45.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.673 --rc genhtml_branch_coverage=1 00:07:45.673 --rc genhtml_function_coverage=1 00:07:45.673 --rc genhtml_legend=1 00:07:45.673 --rc geninfo_all_blocks=1 00:07:45.673 --rc geninfo_unexecuted_blocks=1 00:07:45.673 00:07:45.673 ' 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.673 17:44:33 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:45.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:45.673 17:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:50.947 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:50.947 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.947 17:44:38 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:50.947 Found net devices under 0000:31:00.0: cvl_0_0 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:50.947 Found net devices under 0000:31:00.1: cvl_0_1 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.947 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:50.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:07:50.948 00:07:50.948 --- 10.0.0.2 ping statistics --- 00:07:50.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.948 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:50.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:07:50.948 00:07:50.948 --- 10.0.0.1 ping statistics --- 00:07:50.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.948 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2837533 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2837533 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2837533 ']' 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.948 17:44:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:50.948 [2024-12-06 17:44:38.556731] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:07:50.948 [2024-12-06 17:44:38.556796] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.948 [2024-12-06 17:44:38.647403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.948 [2024-12-06 17:44:38.701042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.948 [2024-12-06 17:44:38.701095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.948 [2024-12-06 17:44:38.701113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.948 [2024-12-06 17:44:38.701121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.948 [2024-12-06 17:44:38.701127] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:50.948 [2024-12-06 17:44:38.703311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.948 [2024-12-06 17:44:38.703586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.948 [2024-12-06 17:44:38.703748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.948 [2024-12-06 17:44:38.703750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:51.885 [2024-12-06 17:44:39.428358] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.885 Malloc0 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.885 [2024-12-06 17:44:39.471477] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2837887 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2837889 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2837890 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:51.885 17:44:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:51.885 { 00:07:51.885 "params": { 00:07:51.885 "name": "Nvme$subsystem", 00:07:51.885 "trtype": "$TEST_TRANSPORT", 00:07:51.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.885 "adrfam": "ipv4", 00:07:51.885 "trsvcid": "$NVMF_PORT", 00:07:51.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.885 "hdgst": ${hdgst:-false}, 00:07:51.885 "ddgst": ${ddgst:-false} 00:07:51.885 }, 00:07:51.885 "method": "bdev_nvme_attach_controller" 00:07:51.885 } 00:07:51.885 EOF 00:07:51.885 )") 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2837892 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:51.885 { 00:07:51.885 "params": { 00:07:51.885 "name": "Nvme$subsystem", 00:07:51.885 "trtype": "$TEST_TRANSPORT", 00:07:51.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.885 "adrfam": "ipv4", 00:07:51.885 "trsvcid": "$NVMF_PORT", 00:07:51.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.885 "hdgst": ${hdgst:-false}, 00:07:51.885 "ddgst": ${ddgst:-false} 00:07:51.885 }, 00:07:51.885 "method": "bdev_nvme_attach_controller" 00:07:51.885 } 00:07:51.885 EOF 00:07:51.885 )") 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:51.885 17:44:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:51.885 { 00:07:51.885 "params": { 00:07:51.885 "name": "Nvme$subsystem", 00:07:51.885 "trtype": "$TEST_TRANSPORT", 00:07:51.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.885 "adrfam": "ipv4", 00:07:51.885 "trsvcid": "$NVMF_PORT", 00:07:51.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.885 "hdgst": ${hdgst:-false}, 00:07:51.885 "ddgst": ${ddgst:-false} 00:07:51.885 }, 00:07:51.885 "method": "bdev_nvme_attach_controller" 00:07:51.885 } 00:07:51.885 EOF 00:07:51.885 )") 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2837887 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:51.885 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:51.885 { 00:07:51.885 "params": { 00:07:51.886 "name": "Nvme$subsystem", 00:07:51.886 "trtype": "$TEST_TRANSPORT", 00:07:51.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.886 "adrfam": "ipv4", 00:07:51.886 "trsvcid": "$NVMF_PORT", 00:07:51.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.886 "hdgst": ${hdgst:-false}, 00:07:51.886 "ddgst": ${ddgst:-false} 00:07:51.886 }, 00:07:51.886 "method": "bdev_nvme_attach_controller" 00:07:51.886 } 00:07:51.886 EOF 00:07:51.886 )") 00:07:51.886 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:51.886 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:51.886 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:51.886 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:51.886 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:51.886 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:51.886 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:51.886 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:51.886 "params": { 00:07:51.886 "name": "Nvme1", 00:07:51.886 "trtype": "tcp", 00:07:51.886 "traddr": "10.0.0.2", 00:07:51.886 "adrfam": "ipv4", 00:07:51.886 "trsvcid": "4420", 00:07:51.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:51.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:51.886 "hdgst": false, 00:07:51.886 "ddgst": false 00:07:51.886 }, 00:07:51.886 "method": "bdev_nvme_attach_controller" 00:07:51.886 }' 00:07:51.886 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:51.886 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:51.886 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:51.886 "params": { 00:07:51.886 "name": "Nvme1", 00:07:51.886 "trtype": "tcp", 00:07:51.886 "traddr": "10.0.0.2", 00:07:51.886 "adrfam": "ipv4", 00:07:51.886 "trsvcid": "4420", 00:07:51.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:51.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:51.886 "hdgst": false, 00:07:51.886 "ddgst": false 00:07:51.886 }, 00:07:51.886 "method": "bdev_nvme_attach_controller" 00:07:51.886 }' 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:51.886 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:51.886 "params": { 00:07:51.886 "name": "Nvme1", 00:07:51.886 "trtype": "tcp", 00:07:51.886 "traddr": "10.0.0.2", 00:07:51.886 "adrfam": "ipv4", 00:07:51.886 "trsvcid": "4420", 00:07:51.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:51.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:51.886 "hdgst": false, 00:07:51.886 "ddgst": false 00:07:51.886 }, 00:07:51.886 "method": "bdev_nvme_attach_controller" 00:07:51.886 }' 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:51.886 17:44:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:51.886 "params": { 00:07:51.886 "name": "Nvme1", 00:07:51.886 "trtype": "tcp", 00:07:51.886 "traddr": "10.0.0.2", 00:07:51.886 "adrfam": "ipv4", 00:07:51.886 "trsvcid": "4420", 00:07:51.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:51.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:51.886 "hdgst": false, 00:07:51.886 "ddgst": false 00:07:51.886 }, 00:07:51.886 "method": "bdev_nvme_attach_controller" 00:07:51.886 }' [2024-12-06 17:44:39.509929] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... [2024-12-06 17:44:39.509930] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... [2024-12-06 17:44:39.509983] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:51.886 [2024-12-06 17:44:39.509984] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:51.886 [2024-12-06 17:44:39.511297] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... [2024-12-06 17:44:39.511298] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization...
00:07:51.886 [2024-12-06 17:44:39.511349] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:51.886 [2024-12-06 17:44:39.511349] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:51.886 [2024-12-06 17:44:39.670920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.886 [2024-12-06 17:44:39.699711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:52.144 [2024-12-06 17:44:39.723442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.144 [2024-12-06 17:44:39.753502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:52.144 [2024-12-06 17:44:39.762117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.144 [2024-12-06 17:44:39.790865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:52.144 [2024-12-06 17:44:39.801410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.144 [2024-12-06 17:44:39.829924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:52.144 Running I/O for 1 seconds... 00:07:52.402 Running I/O for 1 seconds... 00:07:52.402 Running I/O for 1 seconds... 00:07:52.402 Running I/O for 1 seconds... 00:07:53.337 12048.00 IOPS, 47.06 MiB/s 00:07:53.337 Latency(us) 00:07:53.337 [2024-12-06T16:44:41.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.337 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:53.337 Nvme1n1 : 1.01 12044.45 47.05 0.00 0.00 10555.20 4068.69 17803.95 00:07:53.337 [2024-12-06T16:44:41.164Z] =================================================================================================================== 00:07:53.337 [2024-12-06T16:44:41.164Z] Total : 12044.45 47.05 0.00 0.00 10555.20 4068.69 17803.95 00:07:53.337 18282.00 IOPS, 71.41 MiB/s 00:07:53.337 Latency(us) 00:07:53.337 [2024-12-06T16:44:41.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.337 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:53.337 Nvme1n1 : 1.01 18324.43 71.58 0.00 0.00 6967.72 3276.80 15073.28 00:07:53.337 [2024-12-06T16:44:41.164Z] =================================================================================================================== 00:07:53.337 [2024-12-06T16:44:41.164Z] Total : 18324.43 71.58 0.00 0.00 6967.72 3276.80 15073.28 00:07:53.337 11371.00 IOPS, 44.42 MiB/s 00:07:53.337 Latency(us) 00:07:53.337 [2024-12-06T16:44:41.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.337 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:53.337 Nvme1n1 : 1.01 11499.99 44.92 0.00 0.00 11105.36 2689.71 27088.21 00:07:53.337 [2024-12-06T16:44:41.164Z] =================================================================================================================== 00:07:53.337 [2024-12-06T16:44:41.164Z] Total : 11499.99 44.92 0.00 0.00 11105.36 2689.71 27088.21 00:07:53.337 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2837889 00:07:53.337 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@39 -- # wait 2837890 00:07:53.337 180880.00 IOPS, 706.56 MiB/s 00:07:53.337 Latency(us) 00:07:53.337 [2024-12-06T16:44:41.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.337 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:53.337 Nvme1n1 : 1.00 180525.14 705.18 0.00 0.00 704.73 296.96 1966.08 00:07:53.337 [2024-12-06T16:44:41.164Z] =================================================================================================================== 00:07:53.337 [2024-12-06T16:44:41.164Z] Total : 180525.14 705.18 0.00 0.00 704.73 296.96 1966.08 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2837892 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:53.597 rmmod nvme_tcp 00:07:53.597 rmmod nvme_fabrics 00:07:53.597 rmmod nvme_keyring 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2837533 ']' 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2837533 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2837533 ']' 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2837533 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2837533 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.597 17:44:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2837533' 00:07:53.597 killing process with pid 2837533 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2837533 00:07:53.597 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2837533 00:07:53.856 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:53.856 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:53.856 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:53.856 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:53.856 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:53.856 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:53.856 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:53.856 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:53.856 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:53.856 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.856 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.856 17:44:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.759 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:55.759 00:07:55.759 real 0m10.560s 00:07:55.759 user 0m17.645s 00:07:55.759 sys 0m5.377s 00:07:55.759 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.759 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.759 ************************************ 00:07:55.759 END TEST nvmf_bdev_io_wait 00:07:55.759 ************************************ 00:07:55.759 17:44:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:55.759 17:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:55.759 17:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.759 17:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:55.759 ************************************ 00:07:55.759 START TEST nvmf_queue_depth 00:07:55.759 ************************************ 00:07:55.759 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:55.759 * Looking for test storage... 
00:07:55.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:56.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.018 --rc genhtml_branch_coverage=1 00:07:56.018 --rc genhtml_function_coverage=1 00:07:56.018 --rc genhtml_legend=1 00:07:56.018 --rc geninfo_all_blocks=1 00:07:56.018 --rc geninfo_unexecuted_blocks=1 00:07:56.018 00:07:56.018 ' 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:56.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.018 --rc genhtml_branch_coverage=1 00:07:56.018 --rc genhtml_function_coverage=1 00:07:56.018 --rc genhtml_legend=1 00:07:56.018 --rc geninfo_all_blocks=1 00:07:56.018 --rc geninfo_unexecuted_blocks=1 00:07:56.018 00:07:56.018 ' 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:56.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.018 --rc genhtml_branch_coverage=1 00:07:56.018 --rc genhtml_function_coverage=1 00:07:56.018 --rc genhtml_legend=1 00:07:56.018 --rc geninfo_all_blocks=1 00:07:56.018 --rc geninfo_unexecuted_blocks=1 00:07:56.018 00:07:56.018 ' 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:56.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.018 --rc genhtml_branch_coverage=1 00:07:56.018 --rc genhtml_function_coverage=1 00:07:56.018 --rc genhtml_legend=1 00:07:56.018 --rc geninfo_all_blocks=1 00:07:56.018 --rc geninfo_unexecuted_blocks=1 00:07:56.018 00:07:56.018 ' 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.018 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:56.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:56.019 17:44:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:02.581 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:02.581 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:02.581 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:02.582 Found net devices under 0000:31:00.0: cvl_0_0 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:02.582 Found net devices under 0000:31:00.1: cvl_0_1 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:02.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:08:02.582 00:08:02.582 --- 10.0.0.2 ping statistics --- 00:08:02.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.582 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:02.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:08:02.582 00:08:02.582 --- 10.0.0.1 ping statistics --- 00:08:02.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.582 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2842604 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2842604 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2842604 ']' 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.582 17:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.582 [2024-12-06 17:44:49.449605] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:08:02.582 [2024-12-06 17:44:49.449656] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.582 [2024-12-06 17:44:49.538754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.582 [2024-12-06 17:44:49.578515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.582 [2024-12-06 17:44:49.578559] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.582 [2024-12-06 17:44:49.578568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.582 [2024-12-06 17:44:49.578575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.582 [2024-12-06 17:44:49.578581] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.582 [2024-12-06 17:44:49.579284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.582 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.582 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:02.582 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:02.582 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:02.582 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.582 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.582 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:02.582 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.582 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.582 [2024-12-06 17:44:50.280063] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.582 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.583 Malloc0 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.583 17:44:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.583 [2024-12-06 17:44:50.325384] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2842954 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2842954 /var/tmp/bdevperf.sock 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2842954 ']' 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:02.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.583 17:44:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.583 [2024-12-06 17:44:50.366884] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:08:02.583 [2024-12-06 17:44:50.366944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2842954 ] 00:08:02.842 [2024-12-06 17:44:50.451156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.842 [2024-12-06 17:44:50.504613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.408 17:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.408 17:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:03.408 17:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:03.408 17:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.408 17:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.666 NVMe0n1 00:08:03.666 17:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.666 17:44:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:03.666 Running I/O for 10 seconds... 00:08:05.978 11264.00 IOPS, 44.00 MiB/s [2024-12-06T16:44:54.771Z] 11902.00 IOPS, 46.49 MiB/s [2024-12-06T16:44:55.705Z] 12514.00 IOPS, 48.88 MiB/s [2024-12-06T16:44:56.639Z] 12792.25 IOPS, 49.97 MiB/s [2024-12-06T16:44:57.575Z] 12947.60 IOPS, 50.58 MiB/s [2024-12-06T16:44:58.511Z] 13132.33 IOPS, 51.30 MiB/s [2024-12-06T16:44:59.885Z] 13222.71 IOPS, 51.65 MiB/s [2024-12-06T16:45:00.821Z] 13310.75 IOPS, 52.00 MiB/s [2024-12-06T16:45:01.754Z] 13398.33 IOPS, 52.34 MiB/s [2024-12-06T16:45:01.754Z] 13418.80 IOPS, 52.42 MiB/s 00:08:13.928 Latency(us) 00:08:13.928 [2024-12-06T16:45:01.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.928 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:13.928 Verification LBA range: start 0x0 length 0x4000 00:08:13.928 NVMe0n1 : 10.04 13459.18 52.57 0.00 0.00 75836.69 10158.08 57671.68 00:08:13.928 [2024-12-06T16:45:01.755Z] =================================================================================================================== 00:08:13.928 [2024-12-06T16:45:01.755Z] Total : 13459.18 52.57 0.00 0.00 75836.69 10158.08 57671.68 00:08:13.928 { 00:08:13.928 "results": [ 00:08:13.928 { 00:08:13.928 "job": "NVMe0n1", 00:08:13.928 "core_mask": "0x1", 00:08:13.928 "workload": "verify", 00:08:13.928 "status": "finished", 00:08:13.928 "verify_range": { 00:08:13.928 "start": 0, 00:08:13.928 "length": 16384 00:08:13.928 }, 00:08:13.928 "queue_depth": 1024, 00:08:13.928 "io_size": 4096, 00:08:13.928 "runtime": 10.044299, 00:08:13.928 "iops": 13459.177190961758, 00:08:13.928 "mibps": 52.57491090219437, 00:08:13.928 "io_failed": 0, 00:08:13.928 "io_timeout": 0, 00:08:13.928 "avg_latency_us": 75836.6881920486, 00:08:13.928 "min_latency_us": 10158.08, 00:08:13.928 "max_latency_us": 57671.68 00:08:13.928 } 00:08:13.928 ], 00:08:13.928 "core_count": 1 00:08:13.928 } 00:08:13.928 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2842954 00:08:13.928 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2842954 ']' 00:08:13.928 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2842954 00:08:13.928 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:13.928 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.928 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2842954 00:08:13.928 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.928 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.928 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2842954' 00:08:13.928 killing process with pid 2842954 00:08:13.928 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2842954 00:08:13.928 Received shutdown signal, test time was about 10.000000 seconds 00:08:13.928 00:08:13.928 Latency(us) 00:08:13.928 [2024-12-06T16:45:01.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.928 [2024-12-06T16:45:01.755Z] =================================================================================================================== 00:08:13.928 [2024-12-06T16:45:01.755Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:13.928 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2842954 00:08:13.928 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:13.928 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:13.928 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:13.928 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:13.928 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:13.928 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:13.928 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:13.928 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:13.928 rmmod nvme_tcp 00:08:13.928 rmmod nvme_fabrics 00:08:13.928 rmmod nvme_keyring 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2842604 ']' 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2842604 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2842604 ']' 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2842604 
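As a quick sanity check on the bdevperf summary above: with the 4 KiB I/O size this run used (-o 4096), the reported throughput follows directly from the IOPS figure. A one-liner reproducing the 52.57 MiB/s in the table:

  awk 'BEGIN { printf "%.2f MiB/s\n", 13459.18 * 4096 / (1024 * 1024) }'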
00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2842604 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2842604' 00:08:14.187 killing process with pid 2842604 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2842604 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2842604 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.187 17:45:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.720 17:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:16.720 00:08:16.720 real 0m20.445s 00:08:16.720 user 0m24.910s 00:08:16.720 sys 0m5.600s 00:08:16.720 17:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.720 17:45:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:16.720 ************************************ 00:08:16.720 END TEST nvmf_queue_depth 00:08:16.720 ************************************ 00:08:16.720 17:45:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:16.720 17:45:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:16.720 17:45:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.720 17:45:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:16.720 
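For reference, the nvmf_queue_depth run that just finished boils down to a short RPC sequence. A condensed sketch of what the trace executed, assuming the same checkout and the cvl_0_0_ns_spdk namespace created during nvmftestinit (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, which talks to the default /var/tmp/spdk.sock unless -s overrides it):

  # target side: nvmf_tgt runs inside the namespace holding cvl_0_0 (10.0.0.2)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevperf in the default namespace (cvl_0_1, 10.0.0.1) drives qd=1024 verify I/O
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests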
************************************ 00:08:16.720 START TEST nvmf_target_multipath 00:08:16.720 ************************************ 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:16.720 * Looking for test storage... 00:08:16.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:16.720 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:16.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.721 --rc genhtml_branch_coverage=1 00:08:16.721 --rc genhtml_function_coverage=1 00:08:16.721 --rc genhtml_legend=1 00:08:16.721 --rc geninfo_all_blocks=1 00:08:16.721 --rc geninfo_unexecuted_blocks=1 00:08:16.721 00:08:16.721 ' 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:16.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.721 --rc genhtml_branch_coverage=1 00:08:16.721 --rc genhtml_function_coverage=1 00:08:16.721 --rc genhtml_legend=1 00:08:16.721 --rc geninfo_all_blocks=1 00:08:16.721 --rc geninfo_unexecuted_blocks=1 00:08:16.721 00:08:16.721 ' 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:16.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.721 --rc genhtml_branch_coverage=1 00:08:16.721 --rc genhtml_function_coverage=1 00:08:16.721 --rc genhtml_legend=1 00:08:16.721 --rc geninfo_all_blocks=1 00:08:16.721 --rc geninfo_unexecuted_blocks=1 00:08:16.721 00:08:16.721 ' 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:16.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.721 --rc genhtml_branch_coverage=1 00:08:16.721 --rc genhtml_function_coverage=1 00:08:16.721 --rc genhtml_legend=1 00:08:16.721 --rc geninfo_all_blocks=1 00:08:16.721 --rc geninfo_unexecuted_blocks=1 00:08:16.721 00:08:16.721 ' 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:16.721 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:16.721 17:45:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:22.144 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:22.145 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:22.145 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:22.145 Found net devices under 0000:31:00.0: cvl_0_0 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:22.145 17:45:09 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:22.145 Found net devices under 0000:31:00.1: cvl_0_1 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:22.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:22.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:08:22.145 00:08:22.145 --- 10.0.0.2 ping statistics --- 00:08:22.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.145 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:22.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:22.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:08:22.145 00:08:22.145 --- 10.0.0.1 ping statistics --- 00:08:22.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:22.145 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:22.145 only one NIC for nvmf test 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:22.145 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
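Reconstructed from the xtrace above, the per-test network plumbing is a short iproute2/iptables sequence; a minimal sketch using the interface names and addresses this particular run picked:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                  # target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port, tagging the rule so teardown can find it later:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                            # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1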
00:08:22.146 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:22.146 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:22.146 rmmod nvme_tcp 00:08:22.146 rmmod nvme_fabrics 00:08:22.146 rmmod nvme_keyring 00:08:22.146 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:22.146 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:22.146 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:22.146 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:22.146 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:22.146 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:22.146 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:22.146 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:22.146 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:22.146 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:22.146 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:22.146 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:22.146 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:22.146 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.146 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.146 17:45:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.049 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:24.049 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:24.050 00:08:24.050 real 0m7.750s 00:08:24.050 user 0m1.391s 00:08:24.050 sys 0m4.216s 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:24.050 ************************************ 00:08:24.050 END TEST nvmf_target_multipath 00:08:24.050 ************************************ 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:24.050 ************************************ 00:08:24.050 START TEST nvmf_zcopy 00:08:24.050 ************************************ 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:24.050 * Looking for test storage... 
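The matching teardown (nvmftestfini, traced twice above because the EXIT trap fires again) tolerates module-unload failures and removes only its own firewall rules. Roughly, with the retry and namespace-removal details approximated from the trace:

sync
set +e                             # unload may fail while connections drain
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
done
modprobe -v -r nvme-fabrics
set -e
# iptr: drop only the rules tagged SPDK_NVMF, leave everything else alone.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk    # approximates the harness's _remove_spdk_ns
ip -4 addr flush cvl_0_1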
00:08:24.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:24.050 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:24.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.310 --rc genhtml_branch_coverage=1 00:08:24.310 --rc genhtml_function_coverage=1 00:08:24.310 --rc genhtml_legend=1 00:08:24.310 --rc geninfo_all_blocks=1 00:08:24.310 --rc geninfo_unexecuted_blocks=1 00:08:24.310 00:08:24.310 ' 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:24.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.310 --rc genhtml_branch_coverage=1 00:08:24.310 --rc genhtml_function_coverage=1 00:08:24.310 --rc genhtml_legend=1 00:08:24.310 --rc geninfo_all_blocks=1 00:08:24.310 --rc geninfo_unexecuted_blocks=1 00:08:24.310 00:08:24.310 ' 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:24.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.310 --rc genhtml_branch_coverage=1 00:08:24.310 --rc genhtml_function_coverage=1 00:08:24.310 --rc genhtml_legend=1 00:08:24.310 --rc geninfo_all_blocks=1 00:08:24.310 --rc geninfo_unexecuted_blocks=1 00:08:24.310 00:08:24.310 ' 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:24.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.310 --rc genhtml_branch_coverage=1 00:08:24.310 --rc genhtml_function_coverage=1 00:08:24.310 --rc genhtml_legend=1 00:08:24.310 --rc geninfo_all_blocks=1 00:08:24.310 --rc geninfo_unexecuted_blocks=1 00:08:24.310 00:08:24.310 ' 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.310 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:24.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:24.311 17:45:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:29.589 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.589 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:29.590 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:29.590 Found net devices under 0000:31:00.0: cvl_0_0 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:29.590 Found net devices under 0000:31:00.1: cvl_0_1 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:29.590 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:29.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:08:29.849 00:08:29.849 --- 10.0.0.2 ping statistics --- 00:08:29.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.849 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:29.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:29.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:08:29.849 00:08:29.849 --- 10.0.0.1 ping statistics --- 00:08:29.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.849 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2854820 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2854820 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2854820 ']' 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:29.849 17:45:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:29.849 [2024-12-06 17:45:17.516443] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
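One recurring blemish in the log: every time common.sh is sourced, its line 33 prints "[: : integer expression expected" because an empty variable reaches a numeric test. The variable name below is hypothetical, but the shape of the bug and of the usual fix are:

[ "$SOME_FLAG" -eq 1 ]             # breaks when SOME_FLAG is unset or empty
[ "${SOME_FLAG:-0}" -eq 1 ]        # a defaulted expansion keeps the test well-formed

It is harmless here — the guard simply fails and execution continues — but the warning repeats in every suite that sources the file.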
00:08:29.849 [2024-12-06 17:45:17.516510] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.849 [2024-12-06 17:45:17.605254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.849 [2024-12-06 17:45:17.641976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.849 [2024-12-06 17:45:17.642012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.849 [2024-12-06 17:45:17.642020] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.849 [2024-12-06 17:45:17.642027] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.849 [2024-12-06 17:45:17.642033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.849 [2024-12-06 17:45:17.642663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.788 [2024-12-06 17:45:18.327080] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.788 [2024-12-06 17:45:18.343258] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.788 malloc0 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:30.788 { 00:08:30.788 "params": { 00:08:30.788 "name": "Nvme$subsystem", 00:08:30.788 "trtype": "$TEST_TRANSPORT", 00:08:30.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.788 "adrfam": "ipv4", 00:08:30.788 "trsvcid": "$NVMF_PORT", 00:08:30.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.788 "hdgst": ${hdgst:-false}, 00:08:30.788 "ddgst": ${ddgst:-false} 00:08:30.788 }, 00:08:30.788 "method": "bdev_nvme_attach_controller" 00:08:30.788 } 00:08:30.788 EOF 00:08:30.788 )") 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
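With the target process up, the zcopy storage stack is assembled entirely over JSON-RPC; rpc_cmd in the trace wraps scripts/rpc.py against the target's default socket. The sequence, as traced just above:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy   # TCP transport with zero-copy enabled
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0          # 32 MiB RAM disk, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1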
00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:08:30.788 17:45:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:30.788 "params": {
00:08:30.788 "name": "Nvme1",
00:08:30.788 "trtype": "tcp",
00:08:30.788 "traddr": "10.0.0.2",
00:08:30.788 "adrfam": "ipv4",
00:08:30.788 "trsvcid": "4420",
00:08:30.788 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:30.788 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:30.788 "hdgst": false,
00:08:30.788 "ddgst": false
00:08:30.788 },
00:08:30.788 "method": "bdev_nvme_attach_controller"
00:08:30.788 }'
00:08:30.788 [2024-12-06 17:45:18.409285] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization...
00:08:30.788 [2024-12-06 17:45:18.409338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2854901 ]
00:08:30.788 [2024-12-06 17:45:18.489731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:30.788 [2024-12-06 17:45:18.541151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:31.047 Running I/O for 10 seconds...
00:08:33.369 8190.00 IOPS, 63.98 MiB/s [2024-12-06T16:45:22.131Z] 9099.00 IOPS, 71.09 MiB/s [2024-12-06T16:45:23.065Z] 9419.33 IOPS, 73.59 MiB/s [2024-12-06T16:45:24.002Z] 9575.75 IOPS, 74.81 MiB/s [2024-12-06T16:45:24.937Z] 9668.20 IOPS, 75.53 MiB/s [2024-12-06T16:45:25.882Z] 9736.00 IOPS, 76.06 MiB/s [2024-12-06T16:45:27.256Z] 9782.71 IOPS, 76.43 MiB/s [2024-12-06T16:45:28.192Z] 9815.25 IOPS, 76.68 MiB/s [2024-12-06T16:45:29.131Z] 9845.22 IOPS, 76.92 MiB/s [2024-12-06T16:45:29.131Z] 9864.40 IOPS, 77.07 MiB/s
00:08:41.304 Latency(us)
00:08:41.304 [2024-12-06T16:45:29.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:41.304 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:08:41.304 Verification LBA range: start 0x0 length 0x1000
00:08:41.304 Nvme1n1 : 10.01 9865.00 77.07 0.00 0.00 12934.42 935.25 27088.21
00:08:41.304 [2024-12-06T16:45:29.131Z] ===================================================================================================================
00:08:41.304 [2024-12-06T16:45:29.131Z] Total : 9865.00 77.07 0.00 0.00 12934.42 935.25 27088.21
00:08:41.304 17:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2857240
00:08:41.304 17:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:08:41.304 17:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:41.304 17:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:08:41.304 17:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:08:41.304 17:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:08:41.304 17:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:08:41.304 17:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:41.304 17:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:41.304 {
00:08:41.304 "params": {
00:08:41.304 "name":
"Nvme$subsystem", 00:08:41.304 "trtype": "$TEST_TRANSPORT", 00:08:41.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.304 "adrfam": "ipv4", 00:08:41.304 "trsvcid": "$NVMF_PORT", 00:08:41.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.304 "hdgst": ${hdgst:-false}, 00:08:41.304 "ddgst": ${ddgst:-false} 00:08:41.304 }, 00:08:41.304 "method": "bdev_nvme_attach_controller" 00:08:41.304 } 00:08:41.304 EOF 00:08:41.304 )") 00:08:41.304 [2024-12-06 17:45:28.968906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.304 [2024-12-06 17:45:28.968936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.304 17:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:41.304 17:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:41.304 17:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:41.304 17:45:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:41.304 "params": { 00:08:41.304 "name": "Nvme1", 00:08:41.304 "trtype": "tcp", 00:08:41.304 "traddr": "10.0.0.2", 00:08:41.304 "adrfam": "ipv4", 00:08:41.304 "trsvcid": "4420", 00:08:41.304 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.304 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.304 "hdgst": false, 00:08:41.304 "ddgst": false 00:08:41.304 }, 00:08:41.304 "method": "bdev_nvme_attach_controller" 00:08:41.304 }' 00:08:41.304 [2024-12-06 17:45:28.976894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.304 [2024-12-06 17:45:28.976903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.304 [2024-12-06 17:45:28.984913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.304 [2024-12-06 17:45:28.984921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.304 [2024-12-06 17:45:28.989061] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:08:41.304 [2024-12-06 17:45:28.989110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2857240 ] 00:08:41.304 [2024-12-06 17:45:28.992934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.304 [2024-12-06 17:45:28.992942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.304 [2024-12-06 17:45:29.000954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.304 [2024-12-06 17:45:29.000963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.304 [2024-12-06 17:45:29.012984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.304 [2024-12-06 17:45:29.012992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.304 [2024-12-06 17:45:29.021004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.304 [2024-12-06 17:45:29.021016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.304 [2024-12-06 17:45:29.029026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.304 [2024-12-06 17:45:29.029034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.304 [2024-12-06 17:45:29.037045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.304 [2024-12-06 17:45:29.037052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.304 [2024-12-06 17:45:29.045066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.304 [2024-12-06 17:45:29.045074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.304 [2024-12-06 17:45:29.045422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.304 [2024-12-06 17:45:29.053087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.304 [2024-12-06 17:45:29.053096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.304 [2024-12-06 17:45:29.061109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.304 [2024-12-06 17:45:29.061118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.304 [2024-12-06 17:45:29.069129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.304 [2024-12-06 17:45:29.069137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.304 [2024-12-06 17:45:29.074314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.304 [2024-12-06 17:45:29.077151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.304 [2024-12-06 17:45:29.077159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.304 [2024-12-06 17:45:29.085172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.304 [2024-12-06 17:45:29.085180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.304 [2024-12-06 17:45:29.093192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:08:41.304 [2024-12-06 17:45:29.093204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.304 [2024-12-06 17:45:29.101223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.304 [2024-12-06 17:45:29.101233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.304 [2024-12-06 17:45:29.109230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.304 [2024-12-06 17:45:29.109240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.304 [2024-12-06 17:45:29.117249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.304 [2024-12-06 17:45:29.117258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.304 [2024-12-06 17:45:29.125270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.304 [2024-12-06 17:45:29.125279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.133290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.133299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.141310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.141317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.149341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.149357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.157355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.157364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.165376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.165388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.173397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.173407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.181415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.181423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.189436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.189444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.197465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.197474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.205492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.205500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 
17:45:29.213509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.213517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.221530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.221540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.229553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.229564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.237572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.237581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.245593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.245602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.253615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.253623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.261636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.261644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.269657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.269665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.277678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.277687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.285697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.285705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.293719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.293727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.301740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.301748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.309765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.309772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.317784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.317796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.325805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.325813] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.333824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.333833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.341845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.341854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.349867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.349876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.357888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.357898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.564 [2024-12-06 17:45:29.365907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.564 [2024-12-06 17:45:29.365916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.824 [2024-12-06 17:45:29.410592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.410608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.418048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.418059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 Running I/O for 5 seconds... 
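The paired errors that fill this stretch of the log are the target rejecting repeated nvmf_subsystem_add_ns requests for NSID 1 on cnode1 while that namespace is still attached: each attempt fails in spdk_nvmf_subsystem_add_ns_ext and is echoed by nvmf_rpc_ns_paused. The suite appears to drive these on purpose alongside the 5-second bdevperf job that has just started. Outside a test that wants the failure, a sketch like the following (the jq query and loop are illustrative, not part of the suite) would pick a free NSID before adding:

  NQN=nqn.2016-06.io.spdk:cnode1
  # Collect the NSIDs already attached to the subsystem
  used=$(scripts/rpc.py nvmf_get_subsystems \
          | jq -r ".[] | select(.nqn==\"$NQN\") | .namespaces[].nsid")
  # Walk up from 1 to the first NSID not in use, then add the namespace there
  nsid=1
  while grep -qx "$nsid" <<<"$used"; do nsid=$((nsid + 1)); done
  scripts/rpc.py nvmf_subsystem_add_ns "$NQN" malloc0 -n "$nsid"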
00:08:41.825 [2024-12-06 17:45:29.426068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.426076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.437115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.437132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.445443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.445460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.454210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.454226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.463577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.463593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.472627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.472644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.481715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.481731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.490378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.490396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.499288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.499304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.508371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.508387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.517615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.517631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.526552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.526568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.535596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.535612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.544584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.544601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.553636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 
[2024-12-06 17:45:29.553652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.562676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.562693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.571159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.571174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.580235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.580251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.589165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.589180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.598196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.598212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.607211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.607227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.616087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.616108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.624780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.624795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.633155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.633170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.641871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.641887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.825 [2024-12-06 17:45:29.650538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.825 [2024-12-06 17:45:29.650554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.085 [2024-12-06 17:45:29.659420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.085 [2024-12-06 17:45:29.659436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.085 [2024-12-06 17:45:29.668290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.085 [2024-12-06 17:45:29.668306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.085 [2024-12-06 17:45:29.677581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.085 [2024-12-06 17:45:29.677596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.085 [2024-12-06 17:45:29.686123] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.085 [2024-12-06 17:45:29.686138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.085 [2024-12-06 17:45:29.694831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.085 [2024-12-06 17:45:29.694846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.085 [2024-12-06 17:45:29.703134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.085 [2024-12-06 17:45:29.703150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.085 [2024-12-06 17:45:29.711951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.085 [2024-12-06 17:45:29.711967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.720667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.720682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.729169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.729185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.737504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.737520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.746388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.746404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.755582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.755598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.764146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.764161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.773226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.773242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.781647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.781662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.790816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.790832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.799251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.799266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.808011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.808027] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.816989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.817005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.826028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.826044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.834730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.834746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.843490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.843506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.852792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.852808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.861797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.861812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.870264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.870279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.879068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.879083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.887711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.887726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.896764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.896779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.086 [2024-12-06 17:45:29.905734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.086 [2024-12-06 17:45:29.905749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:29.914737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:29.914753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:29.923676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:29.923692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:29.932724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:29.932739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:29.941722] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:29.941737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:29.950729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:29.950743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:29.959927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:29.959943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:29.968788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:29.968803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:29.977912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:29.977927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:29.985989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:29.986003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:29.994917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:29.994932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.004090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.004112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.013075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.013090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.021407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.021422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.029974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.029989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.038503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.038518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.047491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.047506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.056587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.056602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.065444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.065459] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.074623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.074638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.083711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.083726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.092936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.092951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.101912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.101926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.110864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.110878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.120322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.120337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.129354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.129369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.137838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.137853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.146177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.146192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.155236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.155251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.164191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.164206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.346 [2024-12-06 17:45:30.172557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.346 [2024-12-06 17:45:30.172571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.181888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.181907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.190724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.190739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.199754] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.199769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.208654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.208669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.217643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.217658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.226310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.226325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.235699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.235713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.244755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.244770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.253844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.253859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.262426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.262441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.270905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.270920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.279990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.280005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.288770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.288785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.297824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.297838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.306816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.306831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.315850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.315865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.324797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.324812] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.333504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.333519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.342077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.342092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.350911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.350929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.359774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.359790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.368325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.368340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.607 [2024-12-06 17:45:30.376825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.607 [2024-12-06 17:45:30.376839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.608 [2024-12-06 17:45:30.385509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.608 [2024-12-06 17:45:30.385524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.608 [2024-12-06 17:45:30.394481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.608 [2024-12-06 17:45:30.394496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.608 [2024-12-06 17:45:30.403678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.608 [2024-12-06 17:45:30.403693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.608 [2024-12-06 17:45:30.412642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.608 [2024-12-06 17:45:30.412656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.608 [2024-12-06 17:45:30.421515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.608 [2024-12-06 17:45:30.421529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.608 19303.00 IOPS, 150.80 MiB/s [2024-12-06T16:45:30.435Z] [2024-12-06 17:45:30.430574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.608 [2024-12-06 17:45:30.430589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.868 [2024-12-06 17:45:30.439655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.868 [2024-12-06 17:45:30.439670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.868 [2024-12-06 17:45:30.448569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.868 [2024-12-06 17:45:30.448584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.868 [2024-12-06 
17:45:30.457702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.868 [2024-12-06 17:45:30.457718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.868 [2024-12-06 17:45:30.466686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.868 [2024-12-06 17:45:30.466701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.868 [2024-12-06 17:45:30.475624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.868 [2024-12-06 17:45:30.475639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.868 [2024-12-06 17:45:30.484701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.868 [2024-12-06 17:45:30.484716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.868 [2024-12-06 17:45:30.493439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.868 [2024-12-06 17:45:30.493454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.868 [2024-12-06 17:45:30.502484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.868 [2024-12-06 17:45:30.502499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.868 [2024-12-06 17:45:30.511411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.868 [2024-12-06 17:45:30.511426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.868 [2024-12-06 17:45:30.520384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.868 [2024-12-06 17:45:30.520404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.868 [2024-12-06 17:45:30.529251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.868 [2024-12-06 17:45:30.529266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.868 [2024-12-06 17:45:30.538174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.868 [2024-12-06 17:45:30.538189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.868 [2024-12-06 17:45:30.547466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.868 [2024-12-06 17:45:30.547481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.868 [2024-12-06 17:45:30.556643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.868 [2024-12-06 17:45:30.556658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.868 [2024-12-06 17:45:30.565634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.868 [2024-12-06 17:45:30.565648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.868 [2024-12-06 17:45:30.574463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.868 [2024-12-06 17:45:30.574478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.868 [2024-12-06 17:45:30.583362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:42.868 [2024-12-06 17:45:30.583377] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[log condensed: from 00:08:42.868 (2024-12-06 17:45:30.592) through 00:08:45.723 (2024-12-06 17:45:33.291) the same two-line error pair repeats back-to-back, one rejected attempt roughly every 9 ms (about 300 iterations in all), with only the timestamps changing between iterations:
  subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
  nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
Two I/O throughput samples are interleaved with the error stream:
  19444.00 IOPS, 151.91 MiB/s [2024-12-06T16:45:31.475Z]
  19482.33 IOPS, 152.21 MiB/s [2024-12-06T16:45:32.513Z]
The stream resumes below.]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.464 [2024-12-06 17:45:33.175656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.464 [2024-12-06 17:45:33.175671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.464 [2024-12-06 17:45:33.184911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.464 [2024-12-06 17:45:33.184926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.464 [2024-12-06 17:45:33.193760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.464 [2024-12-06 17:45:33.193775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.464 [2024-12-06 17:45:33.202790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.464 [2024-12-06 17:45:33.202804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.464 [2024-12-06 17:45:33.211716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.464 [2024-12-06 17:45:33.211731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.464 [2024-12-06 17:45:33.220567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.464 [2024-12-06 17:45:33.220582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.464 [2024-12-06 17:45:33.229676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.464 [2024-12-06 17:45:33.229690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.464 [2024-12-06 17:45:33.238671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.464 [2024-12-06 17:45:33.238686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.464 [2024-12-06 17:45:33.247617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.464 [2024-12-06 17:45:33.247631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.464 [2024-12-06 17:45:33.256234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.464 [2024-12-06 17:45:33.256248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.464 [2024-12-06 17:45:33.264626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.464 [2024-12-06 17:45:33.264641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.464 [2024-12-06 17:45:33.273683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.464 [2024-12-06 17:45:33.273697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.464 [2024-12-06 17:45:33.282632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.464 [2024-12-06 17:45:33.282647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.723 [2024-12-06 17:45:33.291320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.723 [2024-12-06 17:45:33.291336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.723 [2024-12-06 17:45:33.300080] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.723 [2024-12-06 17:45:33.300095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.723 [2024-12-06 17:45:33.309015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.723 [2024-12-06 17:45:33.309029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.723 [2024-12-06 17:45:33.317803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.723 [2024-12-06 17:45:33.317818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.723 [2024-12-06 17:45:33.330938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.723 [2024-12-06 17:45:33.330953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.723 [2024-12-06 17:45:33.338603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.723 [2024-12-06 17:45:33.338617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.723 [2024-12-06 17:45:33.347724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.723 [2024-12-06 17:45:33.347739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.723 [2024-12-06 17:45:33.356408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.723 [2024-12-06 17:45:33.356422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.723 [2024-12-06 17:45:33.364985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.723 [2024-12-06 17:45:33.365001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.723 [2024-12-06 17:45:33.373296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.373311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.724 [2024-12-06 17:45:33.382417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.382432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.724 [2024-12-06 17:45:33.391325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.391341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.724 [2024-12-06 17:45:33.399550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.399566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.724 [2024-12-06 17:45:33.408843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.408857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.724 [2024-12-06 17:45:33.417882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.417898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.724 [2024-12-06 17:45:33.426560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.426575] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.724 19517.00 IOPS, 152.48 MiB/s [2024-12-06T16:45:33.551Z] [2024-12-06 17:45:33.435479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.435494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.724 [2024-12-06 17:45:33.444527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.444541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.724 [2024-12-06 17:45:33.453550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.453565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.724 [2024-12-06 17:45:33.462601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.462616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.724 [2024-12-06 17:45:33.471617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.471631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.724 [2024-12-06 17:45:33.480085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.480104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.724 [2024-12-06 17:45:33.488713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.488728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.724 [2024-12-06 17:45:33.497695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.497710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.724 [2024-12-06 17:45:33.506610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.506625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.724 [2024-12-06 17:45:33.515537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.515551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.724 [2024-12-06 17:45:33.524352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.524366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.724 [2024-12-06 17:45:33.533259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.533273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.724 [2024-12-06 17:45:33.542470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.724 [2024-12-06 17:45:33.542488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.983 [2024-12-06 17:45:33.551185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.983 [2024-12-06 17:45:33.551200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.983 [2024-12-06 
17:45:33.560212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.983 [2024-12-06 17:45:33.560227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.983 [2024-12-06 17:45:33.569271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.983 [2024-12-06 17:45:33.569286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.983 [2024-12-06 17:45:33.578471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.983 [2024-12-06 17:45:33.578486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.983 [2024-12-06 17:45:33.586803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.983 [2024-12-06 17:45:33.586817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.983 [2024-12-06 17:45:33.595990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.983 [2024-12-06 17:45:33.596005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.983 [2024-12-06 17:45:33.605044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.983 [2024-12-06 17:45:33.605058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.983 [2024-12-06 17:45:33.613951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.983 [2024-12-06 17:45:33.613966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.622746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.622760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.631759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.631774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.640177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.640192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.649034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.649049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.657669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.657684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.666686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.666701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.675733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.675749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.684684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.684699] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.693375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.693390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.702600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.702615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.711870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.711889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.720731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.720746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.729619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.729634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.738228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.738242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.747412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.747427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.756410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.756425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.765426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.765441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.773818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.773833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.782666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.782681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.791779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.791794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.800787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.800802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.984 [2024-12-06 17:45:33.809204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.984 [2024-12-06 17:45:33.809219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.817846] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.817860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.826748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.826762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.835507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.835522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.844602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.844617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.853473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.853488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.862217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.862232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.871296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.871312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.879733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.879752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.888272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.888287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.897228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.897244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.906169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.906184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.914859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.914874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.923796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.923813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.932800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.932815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.941612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.941627] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.950575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.950589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.959236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.959253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.968486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.968501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.977195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.977212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.985944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.985960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:33.994637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:33.994653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:34.004132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:34.004148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:34.012657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:34.012673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:34.021211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:34.021227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:34.030405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:34.030421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.243 [2024-12-06 17:45:34.039589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.243 [2024-12-06 17:45:34.039605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.244 [2024-12-06 17:45:34.048531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.244 [2024-12-06 17:45:34.048546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.244 [2024-12-06 17:45:34.057474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.244 [2024-12-06 17:45:34.057490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.244 [2024-12-06 17:45:34.066391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.244 [2024-12-06 17:45:34.066407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.074747] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.074763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.083575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.083591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.092525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.092541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.101523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.101539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.110615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.110630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.119468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.119484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.128271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.128287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.137268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.137284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.146109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.146124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.155120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.155136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.164373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.164388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.173349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.173365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.182067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.182083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.191114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.191130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.200243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.200259] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.209246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.209262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.218237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.218252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.227318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.227334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.235964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.235980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.244779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.244795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.253974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.253990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.262589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.262605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.271592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.271608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.280890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.280906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.289963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.289979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.298848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.298864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.307691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.307707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.316405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.316421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.502 [2024-12-06 17:45:34.325365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.502 [2024-12-06 17:45:34.325382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.334443] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.334459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.342891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.342907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.352097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.352117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.360477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.360493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.369211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.369227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.377836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.377851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.386860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.386875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.395312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.395326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.404292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.404307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.413272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.413286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.422169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.422184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.431193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.431207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 19533.80 IOPS, 152.61 MiB/s 00:08:46.761 Latency(us) 00:08:46.761 [2024-12-06T16:45:34.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.761 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:46.761 Nvme1n1 : 5.00 19542.41 152.68 0.00 0.00 6544.90 2566.83 15837.87 00:08:46.761 [2024-12-06T16:45:34.588Z] =================================================================================================================== 00:08:46.761 [2024-12-06T16:45:34.588Z] Total : 19542.41 152.68 0.00 0.00 6544.90 2566.83 15837.87 00:08:46.761 [2024-12-06 17:45:34.439261] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.439275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.445399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.445410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.453420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.453431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.461445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.461455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.469465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.469475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.477483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.477492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.485501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.485509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.493519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.493528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.501541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.501549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.509574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.509589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.517581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.517589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.525604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.525614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 [2024-12-06 17:45:34.533622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.761 [2024-12-06 17:45:34.533630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2857240) - No such process 00:08:46.761 17:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2857240 00:08:46.761 17:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.761 17:45:34 
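Note: the long error run above is the test's intended negative path: while NSID 1 is still attached and I/O is running, the test keeps asking the target to add another namespace with the same NSID, and every attempt must be rejected. A minimal sketch of such a probe, assuming SPDK's stock scripts/rpc.py on the default socket (the loop bound and the malloc0 bdev name are illustrative, not taken from zcopy.sh):

    #!/usr/bin/env bash
    # Hypothetical reconstruction of the NSID-collision probe seen above:
    # adding a namespace whose NSID is already in use must fail every time.
    RPC=./scripts/rpc.py
    for _ in {1..200}; do
        if $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
            echo "BUG: duplicate NSID 1 was accepted" >&2
            exit 1
        fi
    done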
00:08:46.761 17:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.761 17:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:46.761 17:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.761 17:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:46.761 17:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.761 17:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:46.761 delay0
00:08:46.761 17:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.761 17:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:08:46.761 17:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.761 17:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:46.761 17:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:46.761 17:45:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:08:47.021 [2024-12-06 17:45:34.687238] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:08:55.147 Initializing NVMe Controllers
00:08:55.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:55.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:55.147 Initialization complete. Launching workers.
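Note: the xtrace above reduces to a short RPC sequence: free NSID 1, wrap malloc0 in a high-latency delay bdev, re-export it as NSID 1, then drive queued I/O at it with the abort example so that commands are still in flight when aborts arrive (that intent is read from the trace, not stated by zcopy.sh). A sketch of the same steps from the SPDK repo root, assuming a running nvmf_tgt and the stock scripts/rpc.py on the default socket (the test itself goes through its rpc_cmd wrapper):

    #!/usr/bin/env bash
    # Sketch of the namespace swap and abort run traced above; illustrative only.
    RPC=./scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Free NSID 1 so the slow bdev can take its place.
    $RPC nvmf_subsystem_remove_ns "$NQN" 1

    # Wrap malloc0 in a delay bdev: -r/-t are average/p99 read latency (us),
    # -w/-n are average/p99 write latency (us); one second everywhere keeps
    # I/Os in flight long enough for aborts to catch them.
    $RPC bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

    # Re-export the delayed bdev as NSID 1.
    $RPC nvmf_subsystem_add_ns "$NQN" delay0 -n 1

    # Same flags as the run above: -c core mask, -t seconds, -q queue depth,
    # -w workload, -M read percentage.
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'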
00:08:55.147 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 234, failed: 35910
00:08:55.147 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 36024, failed to submit 120
00:08:55.147 success 35946, unsuccessful 78, failed 0
00:08:55.147 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:08:55.147 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:08:55.147 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:55.147 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:08:55.147 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:55.147 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:08:55.147 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:55.147 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:55.147 rmmod nvme_tcp
00:08:55.147 rmmod nvme_fabrics
00:08:55.147 rmmod nvme_keyring
00:08:55.147 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:55.147 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:08:55.147 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:08:55.147 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2854820 ']'
00:08:55.147 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2854820
00:08:55.147 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2854820 ']'
00:08:55.147 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2854820
00:08:55.147 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:08:55.147 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:55.147 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2854820
00:08:55.148 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:55.148 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:55.148 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2854820'
00:08:55.148 killing process with pid 2854820
00:08:55.148 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2854820
00:08:55.148 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2854820
00:08:55.148 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:55.148 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:55.148 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:55.148 17:45:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:08:55.148 17:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:08:55.148 17:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:55.148 17:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:08:55.148 17:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:55.148 17:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:55.148 17:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:55.148 17:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:55.148 17:45:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:56.528
00:08:56.528 real 0m32.230s
00:08:56.528 user 0m44.886s
00:08:56.528 sys 0m9.433s
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:56.528 ************************************
00:08:56.528 END TEST nvmf_zcopy
00:08:56.528 ************************************
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:56.528 ************************************
00:08:56.528 START TEST nvmf_nmic
00:08:56.528 ************************************
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:08:56.528 * Looking for test storage...
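Note: condensed, the nvmftestfini teardown traced above amounts to a handful of host-side steps. A rough sketch only: pid 2854820 and the cvl_0_1 interface are specific to this rig, common.sh's retry loop and guards are omitted, and wait only succeeds because the target was launched from the same shell, as it is in common.sh.

    #!/usr/bin/env bash
    # Approximate effect of nvmftestfini for this tcp/phy run; illustrative,
    # not a replacement for nvmf/common.sh.
    sync
    modprobe -v -r nvme-tcp       # also drops nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 2854820 && wait 2854820  # stop the nvmf_tgt reactor started earlier
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop SPDK test rules
    ip -4 addr flush cvl_0_1      # clear the test address from the NIC port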
00:08:56.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:08:56.528 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
[common/autotest_common.sh@1724-@1725 xtrace elided: LCOV_OPTS and LCOV='lcov ...' are assigned and exported with the same multi-line option block (--rc lcov_branch_coverage=1, --rc lcov_function_coverage=1, --rc genhtml_branch_coverage=1, --rc genhtml_function_coverage=1, --rc genhtml_legend=1, --rc geninfo_all_blocks=1, --rc geninfo_unexecuted_blocks=1)]
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
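Note: the lt/cmp_versions trace above is scripts/common.sh deciding that the installed lcov (1.15 on this rig) predates version 2, which is what enables the legacy --rc options captured in lcov_rc_opt. A simplified standalone sketch of that comparison (numeric fields only; the real helper drives >, == and friends through the same loop):

    #!/usr/bin/env bash
    # Simplified re-creation of lt()/cmp_versions as traced above
    # (modeled on scripts/common.sh; splits versions on '.', '-' and ':').
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }

    # As in the trace: 1.15 < 2, so the branch/function-coverage options apply.
    ver=$(lcov --version | awk '{print $NF}')
    if lt "$ver" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi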
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[paths/export.sh@2-@6 xtrace elided: each line re-prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the already-prefixed PATH, then exports and echoes the resulting multi-hundred-character value]
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:56.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:56.529 17:45:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:01.802 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:01.802 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:01.802 17:45:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:01.802 Found net devices under 0000:31:00.0: cvl_0_0 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:01.802 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:01.803 Found net devices under 0000:31:00.1: cvl_0_1 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.803 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.061 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.061 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.061 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:02.061 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.061 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.061 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.061 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:02.061 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:02.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:09:02.061 00:09:02.061 --- 10.0.0.2 ping statistics --- 00:09:02.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.061 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:09:02.061 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:02.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:09:02.061 00:09:02.061 --- 10.0.0.1 ping statistics --- 00:09:02.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.061 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:09:02.061 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.061 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:02.061 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:02.061 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.061 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:02.062 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:02.062 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.062 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:02.062 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:02.062 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:02.062 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:02.062 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:02.062 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.062 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2864584 00:09:02.062 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:02.062 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2864584 00:09:02.062 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2864584 ']' 00:09:02.062 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.062 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.062 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.062 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.062 17:45:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.062 [2024-12-06 17:45:49.808117] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
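
At this point nvmfappstart has launched the target inside the namespace and waitforlisten polls the RPC socket before any configuration RPCs are sent. A condensed sketch of that start-and-wait pattern, assuming the workspace paths and default RPC socket seen in the trace (the real waitforlisten in autotest_common.sh retries longer and with more checks):

    # launch nvmf_tgt in the target namespace: shm id 0, all tracepoints, 4-core mask
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # poll the default UNIX-domain RPC socket until the app answers
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        if "$rpc" -s /var/tmp/spdk.sock spdk_get_version &> /dev/null; then
            break
        fi
        kill -0 "$nvmfpid" || exit 1    # give up if the target died during startup
        sleep 0.1
    done
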
00:09:02.062 [2024-12-06 17:45:49.808167] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.062 [2024-12-06 17:45:49.883336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.322 [2024-12-06 17:45:49.915766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.322 [2024-12-06 17:45:49.915796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.322 [2024-12-06 17:45:49.915801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.322 [2024-12-06 17:45:49.915806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.322 [2024-12-06 17:45:49.915810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.322 [2024-12-06 17:45:49.917072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.322 [2024-12-06 17:45:49.917230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.322 [2024-12-06 17:45:49.917342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.322 [2024-12-06 17:45:49.917351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.891 [2024-12-06 17:45:50.615942] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.891 Malloc0 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.891 [2024-12-06 17:45:50.664830] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:02.891 test case1: single bdev can't be used in multiple subsystems 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.891 [2024-12-06 17:45:50.688707] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:02.891 [2024-12-06 17:45:50.688723] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:02.891 [2024-12-06 17:45:50.688729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.891 request: 00:09:02.891 { 00:09:02.891 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:02.891 "namespace": { 00:09:02.891 "bdev_name": "Malloc0", 00:09:02.891 "no_auto_visible": false, 
00:09:02.891 "hide_metadata": false 00:09:02.891 }, 00:09:02.891 "method": "nvmf_subsystem_add_ns", 00:09:02.891 "req_id": 1 00:09:02.891 } 00:09:02.891 Got JSON-RPC error response 00:09:02.891 response: 00:09:02.891 { 00:09:02.891 "code": -32602, 00:09:02.891 "message": "Invalid parameters" 00:09:02.891 } 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:02.891 Adding namespace failed - expected result. 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:02.891 test case2: host connect to nvmf target in multiple paths 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:02.891 [2024-12-06 17:45:50.696820] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.891 17:45:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:04.797 17:45:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:06.176 17:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:06.176 17:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:06.176 17:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:06.176 17:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:06.176 17:45:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:08.081 17:45:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:08.081 17:45:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:08.081 17:45:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:08.081 17:45:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:08.081 17:45:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:08.081 17:45:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:08.081 17:45:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:08.081 [global] 00:09:08.081 thread=1 00:09:08.081 invalidate=1 00:09:08.081 rw=write 00:09:08.081 time_based=1 00:09:08.081 runtime=1 00:09:08.081 ioengine=libaio 00:09:08.081 direct=1 00:09:08.081 bs=4096 00:09:08.081 iodepth=1 00:09:08.081 norandommap=0 00:09:08.081 numjobs=1 00:09:08.081 00:09:08.081 verify_dump=1 00:09:08.081 verify_backlog=512 00:09:08.081 verify_state_save=0 00:09:08.081 do_verify=1 00:09:08.081 verify=crc32c-intel 00:09:08.081 [job0] 00:09:08.081 filename=/dev/nvme0n1 00:09:08.081 Could not set queue depth (nvme0n1) 00:09:08.341 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:08.341 fio-3.35 00:09:08.341 Starting 1 thread 00:09:09.719 00:09:09.719 job0: (groupid=0, jobs=1): err= 0: pid=2866127: Fri Dec 6 17:45:57 2024 00:09:09.719 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:09.719 slat (nsec): min=2942, max=27803, avg=15188.51, stdev=5065.61 00:09:09.719 clat (usec): min=510, max=1046, avg=846.85, stdev=75.68 00:09:09.719 lat (usec): min=513, max=1062, avg=862.04, stdev=75.83 00:09:09.719 clat percentiles (usec): 00:09:09.719 | 1.00th=[ 627], 5.00th=[ 717], 10.00th=[ 758], 20.00th=[ 791], 00:09:09.719 | 30.00th=[ 816], 40.00th=[ 840], 50.00th=[ 857], 60.00th=[ 873], 00:09:09.719 | 70.00th=[ 889], 80.00th=[ 914], 90.00th=[ 938], 95.00th=[ 955], 00:09:09.719 | 99.00th=[ 996], 99.50th=[ 1004], 99.90th=[ 1045], 99.95th=[ 1045], 00:09:09.719 | 99.99th=[ 1045] 00:09:09.719 write: IOPS=838, BW=3353KiB/s (3433kB/s)(3356KiB/1001msec); 0 zone resets 00:09:09.719 slat (usec): min=3, max=28668, avg=53.05, stdev=989.15 00:09:09.719 clat (usec): min=339, max=919, avg=605.72, stdev=104.27 00:09:09.719 lat (usec): min=346, max=29438, avg=658.77, stdev=1000.66 00:09:09.719 clat percentiles (usec): 00:09:09.719 | 1.00th=[ 375], 5.00th=[ 408], 10.00th=[ 453], 20.00th=[ 515], 00:09:09.719 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 644], 00:09:09.719 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 734], 95.00th=[ 758], 00:09:09.719 | 99.00th=[ 791], 99.50th=[ 824], 99.90th=[ 922], 99.95th=[ 922], 00:09:09.719 | 99.99th=[ 922] 00:09:09.719 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:09.719 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:09.719 lat (usec) : 500=11.03%, 750=50.48%, 1000=38.12% 00:09:09.719 lat (msec) : 2=0.37% 00:09:09.719 cpu : usr=1.70%, sys=3.50%, ctx=1354, majf=0, minf=1 00:09:09.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:09.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.719 issued rwts: total=512,839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:09.719 00:09:09.719 Run status group 0 (all jobs): 00:09:09.719 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:09:09.719 WRITE: bw=3353KiB/s (3433kB/s), 3353KiB/s-3353KiB/s (3433kB/s-3433kB/s), io=3356KiB (3437kB), run=1001-1001msec 00:09:09.719 00:09:09.719 Disk stats (read/write): 00:09:09.719 nvme0n1: ios=537/678, merge=0/0, ticks=1342/352, in_queue=1694, util=98.90% 00:09:09.719 17:45:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:09.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:09.719 rmmod nvme_tcp 00:09:09.719 rmmod nvme_fabrics 00:09:09.719 rmmod nvme_keyring 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2864584 ']' 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2864584 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2864584 ']' 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2864584 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2864584 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:09.719 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2864584' 00:09:09.719 killing process with pid 2864584 00:09:09.720 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2864584 00:09:09.720 17:45:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2864584 00:09:09.978 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:09.978 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:09.978 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:09.978 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:09.978 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:09.978 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:09.978 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:09.978 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:09.978 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:09.978 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.978 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.978 17:45:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.884 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:11.884 00:09:11.884 real 0m15.582s 00:09:11.884 user 0m44.324s 00:09:11.884 sys 0m4.915s 00:09:11.884 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.884 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:11.884 ************************************ 00:09:11.884 END TEST nvmf_nmic 00:09:11.884 ************************************ 00:09:11.884 17:45:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:11.884 17:45:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:11.884 17:45:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.884 17:45:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:12.145 ************************************ 00:09:12.145 START TEST nvmf_fio_target 00:09:12.145 ************************************ 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:12.145 * Looking for test storage... 
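
The nmic run that just closed out above (real 0m15.582s) reduces to a short RPC sequence plus one intentional failure. A condensed replay, shown as direct rpc.py calls where the trace uses the rpc_cmd wrapper; error handling is trimmed, and the hostid derivation is an assumption that matches the values in the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}           # uuid suffix of the generated NQN

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # test case1: a second subsystem must NOT be able to claim the same bdev
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo "unexpected: Malloc0 claimed by two subsystems" >&2
        exit 1
    fi

    # test case2: a second listener gives the host two paths to cnode1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

After that, fio writes through /dev/nvme0n1 and both controllers come down with a single nvme disconnect -n nqn.2016-06.io.spdk:cnode1, as the trace shows.
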
00:09:12.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:12.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.145 --rc genhtml_branch_coverage=1 00:09:12.145 --rc genhtml_function_coverage=1 00:09:12.145 --rc genhtml_legend=1 00:09:12.145 --rc geninfo_all_blocks=1 00:09:12.145 --rc geninfo_unexecuted_blocks=1 00:09:12.145 00:09:12.145 ' 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:12.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.145 --rc genhtml_branch_coverage=1 00:09:12.145 --rc genhtml_function_coverage=1 00:09:12.145 --rc genhtml_legend=1 00:09:12.145 --rc geninfo_all_blocks=1 00:09:12.145 --rc geninfo_unexecuted_blocks=1 00:09:12.145 00:09:12.145 ' 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:12.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.145 --rc genhtml_branch_coverage=1 00:09:12.145 --rc genhtml_function_coverage=1 00:09:12.145 --rc genhtml_legend=1 00:09:12.145 --rc geninfo_all_blocks=1 00:09:12.145 --rc geninfo_unexecuted_blocks=1 00:09:12.145 00:09:12.145 ' 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:12.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.145 --rc genhtml_branch_coverage=1 00:09:12.145 --rc genhtml_function_coverage=1 00:09:12.145 --rc genhtml_legend=1 00:09:12.145 --rc geninfo_all_blocks=1 00:09:12.145 --rc geninfo_unexecuted_blocks=1 00:09:12.145 00:09:12.145 ' 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.145 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:12.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:12.146 17:45:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:12.146 17:45:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.502 17:46:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:17.502 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:17.503 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:17.503 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.503 17:46:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:17.503 Found net devices under 0000:31:00.0: cvl_0_0 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:17.503 Found net devices under 0000:31:00.1: cvl_0_1 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:17.503 17:46:05 
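The gather_supported_nvmf_pci_devs pass above matches the host's NICs against known Intel E810/X722 and Mellanox device IDs, then resolves each surviving PCI function to its kernel net device through sysfs. A minimal standalone sketch of that per-device lookup (the PCI address is the first port found in this run; the glob mirrors the pci_net_devs assignment visible in the trace):

pci=0000:31:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # net interfaces on that PCI function
for dev in "${pci_net_devs[@]}"; do
    [[ -e $dev ]] && echo "Found net devices under $pci: ${dev##*/}"
done

On this machine both E810 ports (0000:31:00.0 and 0000:31:00.1) resolve to the renamed interfaces cvl_0_0 and cvl_0_1, which the harness records in net_devs for the TCP setup that follows.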
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:17.503 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.761 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.761 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.761 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:17.761 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:17.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:17.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:09:17.761 00:09:17.761 --- 10.0.0.2 ping statistics --- 00:09:17.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.761 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:09:17.761 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:17.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:09:17.761 00:09:17.761 --- 10.0.0.1 ping statistics --- 00:09:17.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.761 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:09:17.761 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.761 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:17.761 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:17.761 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.761 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:17.761 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:17.761 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.761 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:17.761 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:17.761 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:17.762 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:17.762 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:17.762 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.762 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2870818 00:09:17.762 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:17.762 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2870818 00:09:17.762 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2870818 ']' 00:09:17.762 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.762 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.762 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.762 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.762 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:17.762 [2024-12-06 17:46:05.432303] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
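The nvmf_tcp_init sequence above builds a point-to-point test topology out of the two E810 ports by moving the target port into its own network namespace, so initiator traffic (10.0.0.1) reaches the target (10.0.0.2) over the physical link rather than loopback. Condensed from the trace, with the interface names and addresses exactly as this run assigned them:

ip -4 addr flush cvl_0_0                          # start from clean ports
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                      # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # hide the target port in it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # namespace -> host

The nvmf_tgt process itself is then launched through the same namespace (NVMF_TARGET_NS_CMD wraps it in ip netns exec cvl_0_0_ns_spdk), which is why the nvmfappstart line above runs the binary under that prefix.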
00:09:17.762 [2024-12-06 17:46:05.432351] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.762 [2024-12-06 17:46:05.504241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.762 [2024-12-06 17:46:05.534353] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.762 [2024-12-06 17:46:05.534379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.762 [2024-12-06 17:46:05.534385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.762 [2024-12-06 17:46:05.534390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.762 [2024-12-06 17:46:05.534394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.762 [2024-12-06 17:46:05.535510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.762 [2024-12-06 17:46:05.535634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.762 [2024-12-06 17:46:05.535794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.762 [2024-12-06 17:46:05.535797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:18.020 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.021 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:18.021 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:18.021 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:18.021 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:18.021 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.021 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:18.021 [2024-12-06 17:46:05.780823] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.021 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.279 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:18.279 17:46:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.538 17:46:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:18.538 17:46:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.538 17:46:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:18.538 17:46:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:18.797 17:46:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:18.797 17:46:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:19.055 17:46:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:19.055 17:46:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:19.055 17:46:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:19.314 17:46:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:19.314 17:46:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:19.572 17:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:19.572 17:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:19.572 17:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:19.831 17:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:19.831 17:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:19.831 17:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:19.831 17:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.089 17:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.346 [2024-12-06 17:46:07.927122] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.346 17:46:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:20.346 17:46:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:20.605 17:46:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:21.983 17:46:09 
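With the target app up, target/fio.sh provisions everything over JSON-RPC: seven 64 MB malloc bdevs (512-byte blocks), a RAID-0 and a concat bdev built from some of them, one subsystem with four namespaces, and a TCP listener on 10.0.0.2:4420; the initiator then attaches with nvme-cli. The sequence, condensed from the trace with rpc.py standing in for the full scripts/rpc.py path used above:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                  # repeated: Malloc0 .. Malloc6
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # then Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
    --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

waitforserial then polls lsblk -l -o NAME,SERIAL for the SPDKISFASTANDAWESOME serial until all four namespaces (nvme0n1 through nvme0n4) are visible, which is the grep -c / nvme_devices=4 exchange at the top of the fio section below.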
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:21.983 17:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:21.984 17:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:21.984 17:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:21.984 17:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:21.984 17:46:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:24.522 17:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:24.522 17:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:24.522 17:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:24.522 17:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:24.522 17:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:24.522 17:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:24.522 17:46:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:24.522 [global] 00:09:24.522 thread=1 00:09:24.522 invalidate=1 00:09:24.522 rw=write 00:09:24.522 time_based=1 00:09:24.522 runtime=1 00:09:24.522 ioengine=libaio 00:09:24.522 direct=1 00:09:24.522 bs=4096 00:09:24.522 iodepth=1 00:09:24.522 norandommap=0 00:09:24.522 numjobs=1 00:09:24.522 00:09:24.522 verify_dump=1 00:09:24.522 verify_backlog=512 00:09:24.522 verify_state_save=0 00:09:24.522 do_verify=1 00:09:24.522 verify=crc32c-intel 00:09:24.522 [job0] 00:09:24.522 filename=/dev/nvme0n1 00:09:24.522 [job1] 00:09:24.522 filename=/dev/nvme0n2 00:09:24.522 [job2] 00:09:24.522 filename=/dev/nvme0n3 00:09:24.522 [job3] 00:09:24.522 filename=/dev/nvme0n4 00:09:24.522 Could not set queue depth (nvme0n1) 00:09:24.522 Could not set queue depth (nvme0n2) 00:09:24.522 Could not set queue depth (nvme0n3) 00:09:24.522 Could not set queue depth (nvme0n4) 00:09:24.522 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.522 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.522 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.522 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:24.522 fio-3.35 00:09:24.522 Starting 4 threads 00:09:25.911 00:09:25.911 job0: (groupid=0, jobs=1): err= 0: pid=2872451: Fri Dec 6 17:46:13 2024 00:09:25.911 read: IOPS=20, BW=81.0KiB/s (82.9kB/s)(84.0KiB/1037msec) 00:09:25.911 slat (nsec): min=4851, max=10045, avg=8795.52, stdev=1403.68 00:09:25.911 clat (usec): min=699, max=42172, avg=37684.03, stdev=12291.06 00:09:25.911 lat (usec): min=704, max=42181, avg=37692.82, stdev=12291.65 00:09:25.911 clat percentiles (usec): 00:09:25.911 | 1.00th=[ 701], 5.00th=[ 791], 10.00th=[40633], 
20.00th=[40633], 00:09:25.911 | 30.00th=[41157], 40.00th=[41157], 50.00th=[42206], 60.00th=[42206], 00:09:25.911 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:25.911 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:25.911 | 99.99th=[42206] 00:09:25.911 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:09:25.911 slat (nsec): min=4021, max=54556, avg=8346.97, stdev=5692.10 00:09:25.911 clat (usec): min=166, max=960, avg=469.21, stdev=137.66 00:09:25.911 lat (usec): min=171, max=975, avg=477.56, stdev=140.71 00:09:25.911 clat percentiles (usec): 00:09:25.911 | 1.00th=[ 200], 5.00th=[ 249], 10.00th=[ 297], 20.00th=[ 355], 00:09:25.911 | 30.00th=[ 404], 40.00th=[ 433], 50.00th=[ 453], 60.00th=[ 490], 00:09:25.911 | 70.00th=[ 523], 80.00th=[ 570], 90.00th=[ 668], 95.00th=[ 717], 00:09:25.911 | 99.00th=[ 807], 99.50th=[ 840], 99.90th=[ 963], 99.95th=[ 963], 00:09:25.911 | 99.99th=[ 963] 00:09:25.911 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:09:25.911 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:25.911 lat (usec) : 250=4.88%, 500=55.53%, 750=32.46%, 1000=3.56% 00:09:25.911 lat (msec) : 50=3.56% 00:09:25.911 cpu : usr=0.39%, sys=0.10%, ctx=535, majf=0, minf=1 00:09:25.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.911 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.911 job1: (groupid=0, jobs=1): err= 0: pid=2872477: Fri Dec 6 17:46:13 2024 00:09:25.911 read: IOPS=19, BW=77.3KiB/s (79.1kB/s)(80.0KiB/1035msec) 00:09:25.911 slat (nsec): min=9372, max=27905, avg=24461.60, stdev=5942.96 00:09:25.911 clat (usec): min=1049, max=43061, avg=39896.38, stdev=9155.17 00:09:25.911 lat (usec): min=1058, max=43088, avg=39920.84, stdev=9158.76 00:09:25.911 clat percentiles (usec): 00:09:25.911 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[41157], 20.00th=[41681], 00:09:25.911 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:09:25.911 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:09:25.911 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:25.911 | 99.99th=[43254] 00:09:25.911 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:09:25.911 slat (nsec): min=2682, max=47592, avg=8201.29, stdev=5303.22 00:09:25.911 clat (usec): min=157, max=876, avg=450.20, stdev=133.74 00:09:25.911 lat (usec): min=171, max=888, avg=458.40, stdev=135.35 00:09:25.911 clat percentiles (usec): 00:09:25.911 | 1.00th=[ 208], 5.00th=[ 251], 10.00th=[ 289], 20.00th=[ 326], 00:09:25.911 | 30.00th=[ 367], 40.00th=[ 404], 50.00th=[ 437], 60.00th=[ 478], 00:09:25.911 | 70.00th=[ 519], 80.00th=[ 562], 90.00th=[ 635], 95.00th=[ 685], 00:09:25.911 | 99.00th=[ 824], 99.50th=[ 865], 99.90th=[ 881], 99.95th=[ 881], 00:09:25.911 | 99.99th=[ 881] 00:09:25.911 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:09:25.911 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:25.911 lat (usec) : 250=4.70%, 500=58.46%, 750=31.20%, 1000=1.88% 00:09:25.911 lat (msec) : 2=0.19%, 50=3.57% 00:09:25.911 cpu : usr=0.19%, sys=0.77%, ctx=535, majf=0, minf=1 00:09:25.911 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.911 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.911 job2: (groupid=0, jobs=1): err= 0: pid=2872528: Fri Dec 6 17:46:13 2024 00:09:25.911 read: IOPS=18, BW=74.9KiB/s (76.7kB/s)(76.0KiB/1015msec) 00:09:25.911 slat (nsec): min=11147, max=30760, avg=26973.79, stdev=3900.32 00:09:25.912 clat (usec): min=40839, max=42059, avg=41605.96, stdev=497.84 00:09:25.912 lat (usec): min=40867, max=42087, avg=41632.93, stdev=498.97 00:09:25.912 clat percentiles (usec): 00:09:25.912 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:25.912 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:09:25.912 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:25.912 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:25.912 | 99.99th=[42206] 00:09:25.912 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:09:25.912 slat (nsec): min=9913, max=62747, avg=29970.67, stdev=11366.95 00:09:25.912 clat (usec): min=142, max=625, avg=401.19, stdev=78.04 00:09:25.912 lat (usec): min=177, max=660, avg=431.16, stdev=81.98 00:09:25.912 clat percentiles (usec): 00:09:25.912 | 1.00th=[ 190], 5.00th=[ 277], 10.00th=[ 306], 20.00th=[ 326], 00:09:25.912 | 30.00th=[ 355], 40.00th=[ 392], 50.00th=[ 412], 60.00th=[ 429], 00:09:25.912 | 70.00th=[ 449], 80.00th=[ 465], 90.00th=[ 490], 95.00th=[ 523], 00:09:25.912 | 99.00th=[ 562], 99.50th=[ 611], 99.90th=[ 627], 99.95th=[ 627], 00:09:25.912 | 99.99th=[ 627] 00:09:25.912 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:09:25.912 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:25.912 lat (usec) : 250=3.58%, 500=84.75%, 750=8.10% 00:09:25.912 lat (msec) : 50=3.58% 00:09:25.912 cpu : usr=0.49%, sys=1.68%, ctx=532, majf=0, minf=1 00:09:25.912 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.912 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.912 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.912 job3: (groupid=0, jobs=1): err= 0: pid=2872547: Fri Dec 6 17:46:13 2024 00:09:25.912 read: IOPS=933, BW=3732KiB/s (3822kB/s)(3736KiB/1001msec) 00:09:25.912 slat (nsec): min=2562, max=57605, avg=14089.53, stdev=8182.96 00:09:25.912 clat (usec): min=370, max=917, avg=655.28, stdev=91.46 00:09:25.912 lat (usec): min=383, max=945, avg=669.37, stdev=95.66 00:09:25.912 clat percentiles (usec): 00:09:25.912 | 1.00th=[ 445], 5.00th=[ 510], 10.00th=[ 529], 20.00th=[ 570], 00:09:25.912 | 30.00th=[ 603], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 685], 00:09:25.912 | 70.00th=[ 709], 80.00th=[ 734], 90.00th=[ 766], 95.00th=[ 799], 00:09:25.912 | 99.00th=[ 865], 99.50th=[ 865], 99.90th=[ 922], 99.95th=[ 922], 00:09:25.912 | 99.99th=[ 922] 00:09:25.912 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:25.912 slat (nsec): min=3573, max=65895, avg=15467.40, stdev=7714.65 00:09:25.912 clat (usec): min=117, max=781, avg=343.71, 
stdev=132.82 00:09:25.912 lat (usec): min=128, max=797, avg=359.17, stdev=134.56 00:09:25.912 clat percentiles (usec): 00:09:25.912 | 1.00th=[ 125], 5.00th=[ 169], 10.00th=[ 192], 20.00th=[ 223], 00:09:25.912 | 30.00th=[ 251], 40.00th=[ 285], 50.00th=[ 322], 60.00th=[ 359], 00:09:25.912 | 70.00th=[ 404], 80.00th=[ 469], 90.00th=[ 529], 95.00th=[ 586], 00:09:25.912 | 99.00th=[ 676], 99.50th=[ 709], 99.90th=[ 750], 99.95th=[ 783], 00:09:25.912 | 99.99th=[ 783] 00:09:25.912 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:09:25.912 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:25.912 lat (usec) : 250=15.68%, 500=30.64%, 750=46.02%, 1000=7.66% 00:09:25.912 cpu : usr=3.00%, sys=3.60%, ctx=1959, majf=0, minf=1 00:09:25.912 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:25.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.912 issued rwts: total=934,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.912 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:25.912 00:09:25.912 Run status group 0 (all jobs): 00:09:25.912 READ: bw=3834KiB/s (3926kB/s), 74.9KiB/s-3732KiB/s (76.7kB/s-3822kB/s), io=3976KiB (4071kB), run=1001-1037msec 00:09:25.912 WRITE: bw=9875KiB/s (10.1MB/s), 1975KiB/s-4092KiB/s (2022kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1037msec 00:09:25.912 00:09:25.912 Disk stats (read/write): 00:09:25.912 nvme0n1: ios=52/512, merge=0/0, ticks=1982/238, in_queue=2220, util=98.96% 00:09:25.912 nvme0n2: ios=48/512, merge=0/0, ticks=1974/228, in_queue=2202, util=100.00% 00:09:25.912 nvme0n3: ios=34/512, merge=0/0, ticks=1421/194, in_queue=1615, util=100.00% 00:09:25.912 nvme0n4: ios=634/1024, merge=0/0, ticks=1240/238, in_queue=1478, util=100.00% 00:09:25.912 17:46:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:25.912 [global] 00:09:25.912 thread=1 00:09:25.912 invalidate=1 00:09:25.912 rw=randwrite 00:09:25.912 time_based=1 00:09:25.912 runtime=1 00:09:25.912 ioengine=libaio 00:09:25.912 direct=1 00:09:25.912 bs=4096 00:09:25.912 iodepth=1 00:09:25.912 norandommap=0 00:09:25.912 numjobs=1 00:09:25.912 00:09:25.912 verify_dump=1 00:09:25.912 verify_backlog=512 00:09:25.912 verify_state_save=0 00:09:25.912 do_verify=1 00:09:25.912 verify=crc32c-intel 00:09:25.912 [job0] 00:09:25.912 filename=/dev/nvme0n1 00:09:25.912 [job1] 00:09:25.912 filename=/dev/nvme0n2 00:09:25.912 [job2] 00:09:25.912 filename=/dev/nvme0n3 00:09:25.912 [job3] 00:09:25.912 filename=/dev/nvme0n4 00:09:25.912 Could not set queue depth (nvme0n1) 00:09:25.912 Could not set queue depth (nvme0n2) 00:09:25.912 Could not set queue depth (nvme0n3) 00:09:25.912 Could not set queue depth (nvme0n4) 00:09:26.172 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.172 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.172 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.172 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:26.172 fio-3.35 00:09:26.172 Starting 4 threads 00:09:27.553 00:09:27.553 job0: (groupid=0, 
jobs=1): err= 0: pid=2873040: Fri Dec 6 17:46:14 2024 00:09:27.553 read: IOPS=606, BW=2426KiB/s (2484kB/s)(2428KiB/1001msec) 00:09:27.553 slat (nsec): min=2822, max=47541, avg=19630.70, stdev=8864.04 00:09:27.553 clat (usec): min=237, max=1486, avg=841.70, stdev=158.36 00:09:27.553 lat (usec): min=250, max=1499, avg=861.33, stdev=159.38 00:09:27.553 clat percentiles (usec): 00:09:27.553 | 1.00th=[ 523], 5.00th=[ 594], 10.00th=[ 635], 20.00th=[ 709], 00:09:27.553 | 30.00th=[ 766], 40.00th=[ 799], 50.00th=[ 848], 60.00th=[ 881], 00:09:27.553 | 70.00th=[ 922], 80.00th=[ 963], 90.00th=[ 1020], 95.00th=[ 1090], 00:09:27.553 | 99.00th=[ 1254], 99.50th=[ 1336], 99.90th=[ 1483], 99.95th=[ 1483], 00:09:27.553 | 99.99th=[ 1483] 00:09:27.553 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:27.553 slat (nsec): min=3469, max=68823, avg=15475.03, stdev=11653.08 00:09:27.553 clat (usec): min=164, max=955, avg=443.26, stdev=138.29 00:09:27.553 lat (usec): min=179, max=983, avg=458.74, stdev=141.83 00:09:27.553 clat percentiles (usec): 00:09:27.553 | 1.00th=[ 206], 5.00th=[ 265], 10.00th=[ 289], 20.00th=[ 310], 00:09:27.553 | 30.00th=[ 338], 40.00th=[ 388], 50.00th=[ 433], 60.00th=[ 474], 00:09:27.553 | 70.00th=[ 515], 80.00th=[ 570], 90.00th=[ 635], 95.00th=[ 685], 00:09:27.553 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 930], 99.95th=[ 955], 00:09:27.553 | 99.99th=[ 955] 00:09:27.553 bw ( KiB/s): min= 4096, max= 4096, per=34.40%, avg=4096.00, stdev= 0.00, samples=1 00:09:27.553 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:27.553 lat (usec) : 250=2.33%, 500=40.16%, 750=29.06%, 1000=23.85% 00:09:27.553 lat (msec) : 2=4.60% 00:09:27.553 cpu : usr=2.10%, sys=3.50%, ctx=1632, majf=0, minf=1 00:09:27.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.553 issued rwts: total=607,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.553 job1: (groupid=0, jobs=1): err= 0: pid=2873051: Fri Dec 6 17:46:14 2024 00:09:27.553 read: IOPS=22, BW=89.5KiB/s (91.6kB/s)(92.0KiB/1028msec) 00:09:27.553 slat (nsec): min=11388, max=27456, avg=25166.00, stdev=4338.17 00:09:27.553 clat (usec): min=467, max=42508, avg=34678.84, stdev=15950.47 00:09:27.553 lat (usec): min=494, max=42519, avg=34704.01, stdev=15951.59 00:09:27.553 clat percentiles (usec): 00:09:27.553 | 1.00th=[ 469], 5.00th=[ 594], 10.00th=[ 832], 20.00th=[41157], 00:09:27.553 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:09:27.553 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:27.553 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:27.553 | 99.99th=[42730] 00:09:27.553 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:09:27.553 slat (nsec): min=4000, max=45311, avg=13061.79, stdev=4124.31 00:09:27.553 clat (usec): min=122, max=783, avg=429.70, stdev=119.78 00:09:27.553 lat (usec): min=136, max=797, avg=442.76, stdev=120.89 00:09:27.553 clat percentiles (usec): 00:09:27.553 | 1.00th=[ 212], 5.00th=[ 241], 10.00th=[ 273], 20.00th=[ 334], 00:09:27.553 | 30.00th=[ 359], 40.00th=[ 392], 50.00th=[ 420], 60.00th=[ 461], 00:09:27.553 | 70.00th=[ 490], 80.00th=[ 529], 90.00th=[ 578], 95.00th=[ 652], 00:09:27.553 | 99.00th=[ 742], 99.50th=[ 
758], 99.90th=[ 783], 99.95th=[ 783], 00:09:27.553 | 99.99th=[ 783] 00:09:27.553 bw ( KiB/s): min= 4096, max= 4096, per=34.40%, avg=4096.00, stdev= 0.00, samples=1 00:09:27.553 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:27.553 lat (usec) : 250=6.36%, 500=62.80%, 750=26.36%, 1000=0.93% 00:09:27.553 lat (msec) : 50=3.55% 00:09:27.553 cpu : usr=0.49%, sys=0.39%, ctx=536, majf=0, minf=1 00:09:27.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.553 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.553 job2: (groupid=0, jobs=1): err= 0: pid=2873069: Fri Dec 6 17:46:14 2024 00:09:27.553 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:27.553 slat (nsec): min=3495, max=58592, avg=19223.67, stdev=7254.47 00:09:27.553 clat (usec): min=514, max=1193, avg=894.51, stdev=89.95 00:09:27.553 lat (usec): min=526, max=1204, avg=913.73, stdev=90.14 00:09:27.553 clat percentiles (usec): 00:09:27.553 | 1.00th=[ 652], 5.00th=[ 734], 10.00th=[ 783], 20.00th=[ 824], 00:09:27.553 | 30.00th=[ 857], 40.00th=[ 881], 50.00th=[ 906], 60.00th=[ 922], 00:09:27.553 | 70.00th=[ 938], 80.00th=[ 963], 90.00th=[ 996], 95.00th=[ 1037], 00:09:27.553 | 99.00th=[ 1074], 99.50th=[ 1106], 99.90th=[ 1188], 99.95th=[ 1188], 00:09:27.553 | 99.99th=[ 1188] 00:09:27.553 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:27.553 slat (nsec): min=4016, max=66135, avg=17166.00, stdev=9580.93 00:09:27.553 clat (usec): min=145, max=823, avg=494.83, stdev=135.53 00:09:27.553 lat (usec): min=150, max=844, avg=512.00, stdev=138.49 00:09:27.553 clat percentiles (usec): 00:09:27.553 | 1.00th=[ 186], 5.00th=[ 269], 10.00th=[ 306], 20.00th=[ 383], 00:09:27.553 | 30.00th=[ 424], 40.00th=[ 461], 50.00th=[ 498], 60.00th=[ 529], 00:09:27.553 | 70.00th=[ 570], 80.00th=[ 611], 90.00th=[ 676], 95.00th=[ 717], 00:09:27.553 | 99.00th=[ 783], 99.50th=[ 799], 99.90th=[ 816], 99.95th=[ 824], 00:09:27.553 | 99.99th=[ 824] 00:09:27.553 bw ( KiB/s): min= 4096, max= 4096, per=34.40%, avg=4096.00, stdev= 0.00, samples=1 00:09:27.553 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:27.553 lat (usec) : 250=2.54%, 500=31.05%, 750=33.66%, 1000=29.56% 00:09:27.553 lat (msec) : 2=3.19% 00:09:27.553 cpu : usr=1.10%, sys=2.90%, ctx=1537, majf=0, minf=1 00:09:27.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.553 issued rwts: total=512,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.553 job3: (groupid=0, jobs=1): err= 0: pid=2873076: Fri Dec 6 17:46:14 2024 00:09:27.553 read: IOPS=20, BW=81.4KiB/s (83.3kB/s)(84.0KiB/1032msec) 00:09:27.553 slat (nsec): min=11714, max=27289, avg=25529.33, stdev=4530.19 00:09:27.553 clat (usec): min=701, max=42853, avg=34190.04, stdev=16532.49 00:09:27.553 lat (usec): min=713, max=42880, avg=34215.57, stdev=16533.62 00:09:27.553 clat percentiles (usec): 00:09:27.553 | 1.00th=[ 701], 5.00th=[ 996], 10.00th=[ 1004], 20.00th=[41681], 00:09:27.553 | 30.00th=[41681], 40.00th=[41681], 
50.00th=[42206], 60.00th=[42206], 00:09:27.553 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:27.553 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:27.553 | 99.99th=[42730] 00:09:27.553 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:09:27.553 slat (nsec): min=4100, max=41248, avg=13362.14, stdev=4313.60 00:09:27.553 clat (usec): min=123, max=924, avg=593.20, stdev=132.44 00:09:27.553 lat (usec): min=128, max=939, avg=606.56, stdev=133.69 00:09:27.553 clat percentiles (usec): 00:09:27.553 | 1.00th=[ 302], 5.00th=[ 383], 10.00th=[ 420], 20.00th=[ 490], 00:09:27.553 | 30.00th=[ 523], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 627], 00:09:27.553 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 807], 00:09:27.553 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 922], 99.95th=[ 922], 00:09:27.553 | 99.99th=[ 922] 00:09:27.553 bw ( KiB/s): min= 4096, max= 4096, per=34.40%, avg=4096.00, stdev= 0.00, samples=1 00:09:27.553 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:27.553 lat (usec) : 250=0.75%, 500=21.39%, 750=63.04%, 1000=11.26% 00:09:27.553 lat (msec) : 2=0.38%, 50=3.19% 00:09:27.553 cpu : usr=0.48%, sys=0.48%, ctx=535, majf=0, minf=1 00:09:27.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:27.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.553 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:27.553 00:09:27.553 Run status group 0 (all jobs): 00:09:27.553 READ: bw=4508KiB/s (4616kB/s), 81.4KiB/s-2426KiB/s (83.3kB/s-2484kB/s), io=4652KiB (4764kB), run=1001-1032msec 00:09:27.553 WRITE: bw=11.6MiB/s (12.2MB/s), 1984KiB/s-4092KiB/s (2032kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1032msec 00:09:27.553 00:09:27.553 Disk stats (read/write): 00:09:27.553 nvme0n1: ios=555/892, merge=0/0, ticks=413/343, in_queue=756, util=87.07% 00:09:27.553 nvme0n2: ios=71/512, merge=0/0, ticks=735/216, in_queue=951, util=90.57% 00:09:27.553 nvme0n3: ios=534/762, merge=0/0, ticks=1330/353, in_queue=1683, util=93.66% 00:09:27.553 nvme0n4: ios=68/512, merge=0/0, ticks=610/296, in_queue=906, util=95.39% 00:09:27.553 17:46:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:27.553 [global] 00:09:27.553 thread=1 00:09:27.553 invalidate=1 00:09:27.553 rw=write 00:09:27.553 time_based=1 00:09:27.553 runtime=1 00:09:27.553 ioengine=libaio 00:09:27.553 direct=1 00:09:27.553 bs=4096 00:09:27.553 iodepth=128 00:09:27.553 norandommap=0 00:09:27.553 numjobs=1 00:09:27.553 00:09:27.553 verify_dump=1 00:09:27.553 verify_backlog=512 00:09:27.553 verify_state_save=0 00:09:27.553 do_verify=1 00:09:27.553 verify=crc32c-intel 00:09:27.553 [job0] 00:09:27.553 filename=/dev/nvme0n1 00:09:27.553 [job1] 00:09:27.553 filename=/dev/nvme0n2 00:09:27.553 [job2] 00:09:27.553 filename=/dev/nvme0n3 00:09:27.553 [job3] 00:09:27.553 filename=/dev/nvme0n4 00:09:27.553 Could not set queue depth (nvme0n1) 00:09:27.553 Could not set queue depth (nvme0n2) 00:09:27.553 Could not set queue depth (nvme0n3) 00:09:27.553 Could not set queue depth (nvme0n4) 00:09:27.553 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:09:27.553 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.553 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.553 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:27.553 fio-3.35 00:09:27.553 Starting 4 threads 00:09:28.932 00:09:28.932 job0: (groupid=0, jobs=1): err= 0: pid=2873590: Fri Dec 6 17:46:16 2024 00:09:28.932 read: IOPS=5340, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1006msec) 00:09:28.932 slat (nsec): min=947, max=10478k, avg=96981.41, stdev=626869.01 00:09:28.932 clat (usec): min=2074, max=40582, avg=11918.18, stdev=4777.07 00:09:28.932 lat (usec): min=3048, max=40585, avg=12015.16, stdev=4830.80 00:09:28.932 clat percentiles (usec): 00:09:28.932 | 1.00th=[ 5735], 5.00th=[ 6390], 10.00th=[ 6915], 20.00th=[ 7504], 00:09:28.932 | 30.00th=[ 9372], 40.00th=[10421], 50.00th=[11207], 60.00th=[12125], 00:09:28.932 | 70.00th=[13698], 80.00th=[15008], 90.00th=[17695], 95.00th=[19006], 00:09:28.932 | 99.00th=[28181], 99.50th=[38011], 99.90th=[39584], 99.95th=[40633], 00:09:28.932 | 99.99th=[40633] 00:09:28.932 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:09:28.932 slat (nsec): min=1639, max=9213.2k, avg=83102.26, stdev=502953.58 00:09:28.932 clat (usec): min=1267, max=40573, avg=11287.59, stdev=5600.01 00:09:28.932 lat (usec): min=1277, max=40575, avg=11370.69, stdev=5642.85 00:09:28.932 clat percentiles (usec): 00:09:28.932 | 1.00th=[ 3425], 5.00th=[ 5342], 10.00th=[ 6521], 20.00th=[ 6915], 00:09:28.932 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[ 9765], 60.00th=[10814], 00:09:28.932 | 70.00th=[11600], 80.00th=[13829], 90.00th=[21627], 95.00th=[23725], 00:09:28.932 | 99.00th=[28181], 99.50th=[28967], 99.90th=[34866], 99.95th=[34866], 00:09:28.932 | 99.99th=[40633] 00:09:28.932 bw ( KiB/s): min=19848, max=25208, per=22.59%, avg=22528.00, stdev=3790.09, samples=2 00:09:28.932 iops : min= 4962, max= 6302, avg=5632.00, stdev=947.52, samples=2 00:09:28.932 lat (msec) : 2=0.02%, 4=0.92%, 10=42.77%, 20=48.11%, 50=8.18% 00:09:28.932 cpu : usr=2.19%, sys=2.79%, ctx=535, majf=0, minf=1 00:09:28.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:28.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.932 issued rwts: total=5373,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.932 job1: (groupid=0, jobs=1): err= 0: pid=2873602: Fri Dec 6 17:46:16 2024 00:09:28.932 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:09:28.932 slat (nsec): min=965, max=9738.5k, avg=87028.82, stdev=601299.98 00:09:28.932 clat (usec): min=3556, max=27288, avg=12185.76, stdev=4296.07 00:09:28.932 lat (usec): min=3563, max=27456, avg=12272.79, stdev=4355.58 00:09:28.932 clat percentiles (usec): 00:09:28.932 | 1.00th=[ 4424], 5.00th=[ 6063], 10.00th=[ 6718], 20.00th=[ 8160], 00:09:28.932 | 30.00th=[ 9765], 40.00th=[11076], 50.00th=[11600], 60.00th=[12387], 00:09:28.932 | 70.00th=[13829], 80.00th=[15795], 90.00th=[17957], 95.00th=[20841], 00:09:28.932 | 99.00th=[22938], 99.50th=[24249], 99.90th=[26084], 99.95th=[26346], 00:09:28.932 | 99.99th=[27395] 00:09:28.932 write: IOPS=4738, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1006msec); 0 zone resets 00:09:28.932 slat (nsec): 
min=1719, max=11861k, avg=104839.14, stdev=605930.93 00:09:28.932 clat (usec): min=246, max=61461, avg=15005.65, stdev=11360.69 00:09:28.932 lat (usec): min=281, max=61473, avg=15110.49, stdev=11429.00 00:09:28.932 clat percentiles (usec): 00:09:28.932 | 1.00th=[ 1123], 5.00th=[ 2704], 10.00th=[ 3326], 20.00th=[ 6128], 00:09:28.932 | 30.00th=[ 7046], 40.00th=[10290], 50.00th=[12387], 60.00th=[13435], 00:09:28.932 | 70.00th=[17957], 80.00th=[23987], 90.00th=[29230], 95.00th=[37487], 00:09:28.932 | 99.00th=[54789], 99.50th=[56886], 99.90th=[61604], 99.95th=[61604], 00:09:28.932 | 99.99th=[61604] 00:09:28.932 bw ( KiB/s): min=14608, max=22504, per=18.61%, avg=18556.00, stdev=5583.32, samples=2 00:09:28.932 iops : min= 3652, max= 5626, avg=4639.00, stdev=1395.83, samples=2 00:09:28.932 lat (usec) : 250=0.01%, 500=0.02%, 750=0.04%, 1000=0.34% 00:09:28.932 lat (msec) : 2=0.91%, 4=5.35%, 10=27.62%, 20=48.86%, 50=15.61% 00:09:28.932 lat (msec) : 100=1.24% 00:09:28.932 cpu : usr=2.99%, sys=4.78%, ctx=470, majf=0, minf=2 00:09:28.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:28.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.932 issued rwts: total=4608,4767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.932 job2: (groupid=0, jobs=1): err= 0: pid=2873629: Fri Dec 6 17:46:16 2024 00:09:28.932 read: IOPS=7083, BW=27.7MiB/s (29.0MB/s)(27.7MiB/1002msec) 00:09:28.932 slat (nsec): min=970, max=12737k, avg=76438.31, stdev=565928.31 00:09:28.932 clat (usec): min=1125, max=61878, avg=9288.31, stdev=4426.51 00:09:28.932 lat (usec): min=3279, max=61887, avg=9364.75, stdev=4485.20 00:09:28.932 clat percentiles (usec): 00:09:28.932 | 1.00th=[ 4424], 5.00th=[ 6325], 10.00th=[ 7504], 20.00th=[ 7832], 00:09:28.932 | 30.00th=[ 8029], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8586], 00:09:28.932 | 70.00th=[ 8979], 80.00th=[ 9765], 90.00th=[11469], 95.00th=[13829], 00:09:28.932 | 99.00th=[32113], 99.50th=[41681], 99.90th=[57934], 99.95th=[61604], 00:09:28.932 | 99.99th=[62129] 00:09:28.932 write: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec); 0 zone resets 00:09:28.932 slat (nsec): min=1769, max=7496.2k, avg=59088.32, stdev=412523.06 00:09:28.932 clat (usec): min=619, max=61841, avg=8533.09, stdev=4569.76 00:09:28.932 lat (usec): min=654, max=61844, avg=8592.18, stdev=4589.23 00:09:28.932 clat percentiles (usec): 00:09:28.932 | 1.00th=[ 3392], 5.00th=[ 4817], 10.00th=[ 5669], 20.00th=[ 7439], 00:09:28.932 | 30.00th=[ 7767], 40.00th=[ 7963], 50.00th=[ 8094], 60.00th=[ 8225], 00:09:28.932 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9110], 95.00th=[12125], 00:09:28.932 | 99.00th=[36439], 99.50th=[44827], 99.90th=[51643], 99.95th=[51643], 00:09:28.932 | 99.99th=[61604] 00:09:28.932 bw ( KiB/s): min=26168, max=31176, per=28.76%, avg=28672.00, stdev=3541.19, samples=2 00:09:28.932 iops : min= 6542, max= 7794, avg=7168.00, stdev=885.30, samples=2 00:09:28.932 lat (usec) : 750=0.03%, 1000=0.03% 00:09:28.932 lat (msec) : 2=0.25%, 4=0.60%, 10=87.38%, 20=10.06%, 50=1.41% 00:09:28.932 lat (msec) : 100=0.26% 00:09:28.932 cpu : usr=3.20%, sys=6.19%, ctx=514, majf=0, minf=1 00:09:28.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:28.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.932 issued rwts: total=7098,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.932 job3: (groupid=0, jobs=1): err= 0: pid=2873637: Fri Dec 6 17:46:16 2024 00:09:28.932 read: IOPS=7118, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1007msec) 00:09:28.932 slat (nsec): min=1000, max=7349.5k, avg=72680.33, stdev=492438.02 00:09:28.932 clat (usec): min=2524, max=25705, avg=8838.87, stdev=2445.57 00:09:28.932 lat (usec): min=2531, max=25709, avg=8911.55, stdev=2483.35 00:09:28.932 clat percentiles (usec): 00:09:28.932 | 1.00th=[ 4883], 5.00th=[ 6325], 10.00th=[ 7046], 20.00th=[ 7570], 00:09:28.932 | 30.00th=[ 7767], 40.00th=[ 7963], 50.00th=[ 8094], 60.00th=[ 8291], 00:09:28.932 | 70.00th=[ 8586], 80.00th=[10159], 90.00th=[11994], 95.00th=[14353], 00:09:28.932 | 99.00th=[17171], 99.50th=[19006], 99.90th=[25035], 99.95th=[25822], 00:09:28.932 | 99.99th=[25822] 00:09:28.932 write: IOPS=7482, BW=29.2MiB/s (30.6MB/s)(29.4MiB/1007msec); 0 zone resets 00:09:28.932 slat (nsec): min=1776, max=16154k, avg=56776.64, stdev=324903.10 00:09:28.932 clat (usec): min=1090, max=49502, avg=8289.52, stdev=4952.92 00:09:28.932 lat (usec): min=1100, max=49513, avg=8346.29, stdev=4980.34 00:09:28.932 clat percentiles (usec): 00:09:28.932 | 1.00th=[ 2769], 5.00th=[ 4228], 10.00th=[ 5407], 20.00th=[ 7242], 00:09:28.932 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:09:28.932 | 70.00th=[ 8160], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[10421], 00:09:28.932 | 99.00th=[44303], 99.50th=[45876], 99.90th=[49021], 99.95th=[49546], 00:09:28.932 | 99.99th=[49546] 00:09:28.932 bw ( KiB/s): min=27176, max=32088, per=29.72%, avg=29632.00, stdev=3473.31, samples=2 00:09:28.932 iops : min= 6794, max= 8022, avg=7408.00, stdev=868.33, samples=2 00:09:28.932 lat (msec) : 2=0.26%, 4=2.29%, 10=84.05%, 20=12.19%, 50=1.20% 00:09:28.932 cpu : usr=4.47%, sys=8.45%, ctx=906, majf=0, minf=2 00:09:28.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:28.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.932 issued rwts: total=7168,7535,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.932 00:09:28.932 Run status group 0 (all jobs): 00:09:28.932 READ: bw=94.1MiB/s (98.6MB/s), 17.9MiB/s-27.8MiB/s (18.8MB/s-29.2MB/s), io=94.7MiB (99.3MB), run=1002-1007msec 00:09:28.932 WRITE: bw=97.4MiB/s (102MB/s), 18.5MiB/s-29.2MiB/s (19.4MB/s-30.6MB/s), io=98.1MiB (103MB), run=1002-1007msec 00:09:28.932 00:09:28.932 Disk stats (read/write): 00:09:28.932 nvme0n1: ios=4658/4959, merge=0/0, ticks=36514/37136, in_queue=73650, util=87.58% 00:09:28.932 nvme0n2: ios=3820/4096, merge=0/0, ticks=32901/48636, in_queue=81537, util=89.40% 00:09:28.932 nvme0n3: ios=5689/6039, merge=0/0, ticks=43088/38191, in_queue=81279, util=92.93% 00:09:28.932 nvme0n4: ios=5814/6144, merge=0/0, ticks=36281/35650, in_queue=71931, util=92.84% 00:09:28.932 17:46:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:28.932 [global] 00:09:28.932 thread=1 00:09:28.932 invalidate=1 00:09:28.932 rw=randwrite 00:09:28.932 time_based=1 00:09:28.932 runtime=1 00:09:28.932 ioengine=libaio 00:09:28.932 direct=1 00:09:28.932 
bs=4096 00:09:28.932 iodepth=128 00:09:28.932 norandommap=0 00:09:28.932 numjobs=1 00:09:28.932 00:09:28.933 verify_dump=1 00:09:28.933 verify_backlog=512 00:09:28.933 verify_state_save=0 00:09:28.933 do_verify=1 00:09:28.933 verify=crc32c-intel 00:09:28.933 [job0] 00:09:28.933 filename=/dev/nvme0n1 00:09:28.933 [job1] 00:09:28.933 filename=/dev/nvme0n2 00:09:28.933 [job2] 00:09:28.933 filename=/dev/nvme0n3 00:09:28.933 [job3] 00:09:28.933 filename=/dev/nvme0n4 00:09:28.933 Could not set queue depth (nvme0n1) 00:09:28.933 Could not set queue depth (nvme0n2) 00:09:28.933 Could not set queue depth (nvme0n3) 00:09:28.933 Could not set queue depth (nvme0n4) 00:09:29.191 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.191 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.191 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.191 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:29.191 fio-3.35 00:09:29.191 Starting 4 threads 00:09:30.579 00:09:30.579 job0: (groupid=0, jobs=1): err= 0: pid=2874112: Fri Dec 6 17:46:18 2024 00:09:30.579 read: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:09:30.579 slat (nsec): min=944, max=16916k, avg=84087.56, stdev=728188.36 00:09:30.579 clat (usec): min=2533, max=35921, avg=10828.02, stdev=5178.38 00:09:30.579 lat (usec): min=2541, max=35947, avg=10912.11, stdev=5233.64 00:09:30.579 clat percentiles (usec): 00:09:30.579 | 1.00th=[ 4883], 5.00th=[ 5538], 10.00th=[ 6194], 20.00th=[ 6783], 00:09:30.579 | 30.00th=[ 7111], 40.00th=[ 7767], 50.00th=[ 8848], 60.00th=[10290], 00:09:30.579 | 70.00th=[12387], 80.00th=[15401], 90.00th=[19268], 95.00th=[21627], 00:09:30.579 | 99.00th=[26346], 99.50th=[30540], 99.90th=[30540], 99.95th=[30540], 00:09:30.579 | 99.99th=[35914] 00:09:30.579 write: IOPS=6026, BW=23.5MiB/s (24.7MB/s)(23.7MiB/1006msec); 0 zone resets 00:09:30.579 slat (nsec): min=1599, max=13584k, avg=77902.17, stdev=515505.03 00:09:30.579 clat (usec): min=488, max=83347, avg=10957.44, stdev=12899.39 00:09:30.579 lat (usec): min=497, max=83354, avg=11035.34, stdev=12992.48 00:09:30.579 clat percentiles (usec): 00:09:30.579 | 1.00th=[ 2343], 5.00th=[ 4015], 10.00th=[ 4752], 20.00th=[ 6063], 00:09:30.579 | 30.00th=[ 6521], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7570], 00:09:30.579 | 70.00th=[ 8717], 80.00th=[11994], 90.00th=[15008], 95.00th=[30278], 00:09:30.579 | 99.00th=[73925], 99.50th=[77071], 99.90th=[83362], 99.95th=[83362], 00:09:30.579 | 99.99th=[83362] 00:09:30.579 bw ( KiB/s): min=16384, max=31096, per=23.25%, avg=23740.00, stdev=10402.95, samples=2 00:09:30.579 iops : min= 4096, max= 7774, avg=5935.00, stdev=2600.74, samples=2 00:09:30.579 lat (usec) : 500=0.02%, 750=0.03%, 1000=0.09% 00:09:30.579 lat (msec) : 2=0.15%, 4=2.67%, 10=63.88%, 20=26.02%, 50=5.18% 00:09:30.579 lat (msec) : 100=1.98% 00:09:30.579 cpu : usr=3.18%, sys=5.87%, ctx=509, majf=0, minf=1 00:09:30.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:30.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.579 issued rwts: total=5632,6063,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.579 
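Each workload above is launched through scripts/fio-wrapper, which writes the job file echoed in the log and runs fio against the four connected namespaces. For reference, a roughly equivalent direct invocation of this iodepth=128 randwrite case for a single namespace, a sketch assuming fio's usual rule that any job-file key can be passed as a --key=value option:

fio --name=job0 --filename=/dev/nvme0n1 \
    --rw=randwrite --bs=4096 --iodepth=128 --numjobs=1 --thread \
    --ioengine=libaio --direct=1 --invalidate=1 \
    --time_based --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_backlog=512 \
    --verify_dump=1 --verify_state_save=0

The verify options are what make these one-second runs meaningful as functional tests: every block is written with an embedded crc32c and read back for checking, so data-path corruption fails the job instead of merely skewing the latency numbers.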
job1: (groupid=0, jobs=1): err= 0: pid=2874123: Fri Dec 6 17:46:18 2024 00:09:30.579 read: IOPS=7111, BW=27.8MiB/s (29.1MB/s)(28.0MiB/1008msec) 00:09:30.579 slat (nsec): min=889, max=12062k, avg=70592.40, stdev=530606.06 00:09:30.579 clat (usec): min=1055, max=56431, avg=9127.34, stdev=5476.64 00:09:30.579 lat (usec): min=1082, max=58560, avg=9197.93, stdev=5519.40 00:09:30.579 clat percentiles (usec): 00:09:30.579 | 1.00th=[ 3523], 5.00th=[ 5407], 10.00th=[ 6325], 20.00th=[ 7111], 00:09:30.579 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 8029], 00:09:30.579 | 70.00th=[ 8455], 80.00th=[ 9765], 90.00th=[12256], 95.00th=[16057], 00:09:30.579 | 99.00th=[35390], 99.50th=[47449], 99.90th=[56361], 99.95th=[56361], 00:09:30.579 | 99.99th=[56361] 00:09:30.579 write: IOPS=7316, BW=28.6MiB/s (30.0MB/s)(28.8MiB/1008msec); 0 zone resets 00:09:30.579 slat (nsec): min=1496, max=9131.7k, avg=62848.77, stdev=406500.20 00:09:30.579 clat (usec): min=1111, max=75420, avg=8470.64, stdev=7928.82 00:09:30.579 lat (usec): min=1121, max=75427, avg=8533.49, stdev=7988.18 00:09:30.579 clat percentiles (usec): 00:09:30.579 | 1.00th=[ 2671], 5.00th=[ 4359], 10.00th=[ 5342], 20.00th=[ 6521], 00:09:30.579 | 30.00th=[ 6849], 40.00th=[ 7046], 50.00th=[ 7242], 60.00th=[ 7504], 00:09:30.579 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 9634], 95.00th=[11600], 00:09:30.579 | 99.00th=[59507], 99.50th=[66847], 99.90th=[69731], 99.95th=[74974], 00:09:30.579 | 99.99th=[74974] 00:09:30.580 bw ( KiB/s): min=25208, max=32768, per=28.40%, avg=28988.00, stdev=5345.73, samples=2 00:09:30.580 iops : min= 6302, max= 8192, avg=7247.00, stdev=1336.43, samples=2 00:09:30.580 lat (msec) : 2=0.46%, 4=2.24%, 10=83.72%, 20=10.74%, 50=1.85% 00:09:30.580 lat (msec) : 100=0.99% 00:09:30.580 cpu : usr=4.07%, sys=6.06%, ctx=845, majf=0, minf=3 00:09:30.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:30.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.580 issued rwts: total=7168,7375,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.580 job2: (groupid=0, jobs=1): err= 0: pid=2874141: Fri Dec 6 17:46:18 2024 00:09:30.580 read: IOPS=5825, BW=22.8MiB/s (23.9MB/s)(22.8MiB/1003msec) 00:09:30.580 slat (nsec): min=947, max=13739k, avg=78398.43, stdev=617895.99 00:09:30.580 clat (usec): min=1132, max=28237, avg=10584.04, stdev=3785.53 00:09:30.580 lat (usec): min=3630, max=28260, avg=10662.44, stdev=3822.62 00:09:30.580 clat percentiles (usec): 00:09:30.580 | 1.00th=[ 5145], 5.00th=[ 6980], 10.00th=[ 7439], 20.00th=[ 8029], 00:09:30.580 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9896], 00:09:30.580 | 70.00th=[11207], 80.00th=[12125], 90.00th=[15795], 95.00th=[18220], 00:09:30.580 | 99.00th=[23200], 99.50th=[24773], 99.90th=[27395], 99.95th=[27657], 00:09:30.580 | 99.99th=[28181] 00:09:30.580 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:09:30.580 slat (nsec): min=1634, max=16688k, avg=78502.86, stdev=612154.93 00:09:30.580 clat (usec): min=1982, max=32004, avg=10603.71, stdev=4936.30 00:09:30.580 lat (usec): min=1986, max=33935, avg=10682.21, stdev=4981.87 00:09:30.580 clat percentiles (usec): 00:09:30.580 | 1.00th=[ 3982], 5.00th=[ 5800], 10.00th=[ 6783], 20.00th=[ 7701], 00:09:30.580 | 30.00th=[ 7963], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9110], 00:09:30.580 
| 70.00th=[10421], 80.00th=[14353], 90.00th=[17171], 95.00th=[20579], 00:09:30.580 | 99.00th=[28967], 99.50th=[30540], 99.90th=[32113], 99.95th=[32113], 00:09:30.580 | 99.99th=[32113] 00:09:30.580 bw ( KiB/s): min=24576, max=24576, per=24.07%, avg=24576.00, stdev= 0.00, samples=2 00:09:30.580 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:09:30.580 lat (msec) : 2=0.06%, 4=0.73%, 10=63.68%, 20=31.36%, 50=4.17% 00:09:30.580 cpu : usr=4.09%, sys=4.09%, ctx=417, majf=0, minf=1 00:09:30.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:30.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.580 issued rwts: total=5843,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.580 job3: (groupid=0, jobs=1): err= 0: pid=2874148: Fri Dec 6 17:46:18 2024 00:09:30.580 read: IOPS=5683, BW=22.2MiB/s (23.3MB/s)(22.4MiB/1007msec) 00:09:30.580 slat (nsec): min=948, max=14122k, avg=70653.44, stdev=556345.53 00:09:30.580 clat (usec): min=2081, max=30819, avg=9866.41, stdev=3645.01 00:09:30.580 lat (usec): min=2108, max=30847, avg=9937.06, stdev=3676.90 00:09:30.580 clat percentiles (usec): 00:09:30.580 | 1.00th=[ 4621], 5.00th=[ 6521], 10.00th=[ 7635], 20.00th=[ 7963], 00:09:30.580 | 30.00th=[ 8160], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8979], 00:09:30.580 | 70.00th=[ 9634], 80.00th=[11338], 90.00th=[15533], 95.00th=[17433], 00:09:30.580 | 99.00th=[27132], 99.50th=[27395], 99.90th=[27657], 99.95th=[27657], 00:09:30.580 | 99.99th=[30802] 00:09:30.580 write: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec); 0 zone resets 00:09:30.580 slat (nsec): min=1552, max=13431k, avg=88869.10, stdev=624436.90 00:09:30.580 clat (usec): min=496, max=93579, avg=11601.08, stdev=13278.41 00:09:30.580 lat (usec): min=525, max=93586, avg=11689.95, stdev=13369.00 00:09:30.580 clat percentiles (usec): 00:09:30.580 | 1.00th=[ 2868], 5.00th=[ 5211], 10.00th=[ 6325], 20.00th=[ 7570], 00:09:30.580 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8160], 60.00th=[ 8291], 00:09:30.580 | 70.00th=[ 8586], 80.00th=[10421], 90.00th=[13829], 95.00th=[28705], 00:09:30.580 | 99.00th=[81265], 99.50th=[85459], 99.90th=[93848], 99.95th=[93848], 00:09:30.580 | 99.99th=[93848] 00:09:30.580 bw ( KiB/s): min=20792, max=28064, per=23.93%, avg=24428.00, stdev=5142.08, samples=2 00:09:30.580 iops : min= 5198, max= 7016, avg=6107.00, stdev=1285.52, samples=2 00:09:30.580 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.02% 00:09:30.580 lat (msec) : 2=0.18%, 4=0.83%, 10=74.62%, 20=19.18%, 50=3.34% 00:09:30.580 lat (msec) : 100=1.81% 00:09:30.580 cpu : usr=3.68%, sys=6.86%, ctx=417, majf=0, minf=2 00:09:30.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:30.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.580 issued rwts: total=5723,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.580 00:09:30.580 Run status group 0 (all jobs): 00:09:30.580 READ: bw=94.4MiB/s (99.0MB/s), 21.9MiB/s-27.8MiB/s (22.9MB/s-29.1MB/s), io=95.2MiB (99.8MB), run=1003-1008msec 00:09:30.580 WRITE: bw=99.7MiB/s (105MB/s), 23.5MiB/s-28.6MiB/s (24.7MB/s-30.0MB/s), io=100MiB (105MB), run=1003-1008msec 00:09:30.580 00:09:30.580 Disk 
stats (read/write): 00:09:30.580 nvme0n1: ios=4273/4608, merge=0/0, ticks=39276/46004, in_queue=85280, util=95.69% 00:09:30.580 nvme0n2: ios=6893/7168, merge=0/0, ticks=46382/42760, in_queue=89142, util=90.16% 00:09:30.580 nvme0n3: ios=4898/5120, merge=0/0, ticks=50860/52751, in_queue=103611, util=99.79% 00:09:30.580 nvme0n4: ios=4664/4975, merge=0/0, ticks=26826/42775, in_queue=69601, util=92.66% 00:09:30.580 17:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:30.580 17:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:30.580 17:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2874324 00:09:30.580 17:46:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:30.580 [global] 00:09:30.580 thread=1 00:09:30.580 invalidate=1 00:09:30.580 rw=read 00:09:30.580 time_based=1 00:09:30.580 runtime=10 00:09:30.580 ioengine=libaio 00:09:30.580 direct=1 00:09:30.580 bs=4096 00:09:30.580 iodepth=1 00:09:30.580 norandommap=1 00:09:30.580 numjobs=1 00:09:30.580 00:09:30.580 [job0] 00:09:30.580 filename=/dev/nvme0n1 00:09:30.580 [job1] 00:09:30.580 filename=/dev/nvme0n2 00:09:30.580 [job2] 00:09:30.580 filename=/dev/nvme0n3 00:09:30.580 [job3] 00:09:30.580 filename=/dev/nvme0n4 00:09:30.580 Could not set queue depth (nvme0n1) 00:09:30.580 Could not set queue depth (nvme0n2) 00:09:30.580 Could not set queue depth (nvme0n3) 00:09:30.580 Could not set queue depth (nvme0n4) 00:09:30.838 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.838 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.838 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.838 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:30.838 fio-3.35 00:09:30.838 Starting 4 threads 00:09:33.370 17:46:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:33.629 17:46:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:33.629 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=319488, buflen=4096 00:09:33.629 fio: pid=2874651, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:33.629 17:46:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:33.629 17:46:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:33.629 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=15360000, buflen=4096 00:09:33.629 fio: pid=2874645, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:33.888 17:46:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:33.888 17:46:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:33.888 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=7012352, buflen=4096 00:09:33.888 fio: pid=2874618, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:34.147 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=1388544, buflen=4096 00:09:34.147 fio: pid=2874628, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:09:34.147 17:46:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.147 17:46:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:34.147 00:09:34.147 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2874618: Fri Dec 6 17:46:21 2024 00:09:34.147 read: IOPS=565, BW=2259KiB/s (2314kB/s)(6848KiB/3031msec) 00:09:34.147 slat (usec): min=3, max=26145, avg=39.19, stdev=661.80 00:09:34.147 clat (usec): min=448, max=42971, avg=1713.98, stdev=5564.71 00:09:34.147 lat (usec): min=474, max=42995, avg=1753.18, stdev=5600.99 00:09:34.147 clat percentiles (usec): 00:09:34.147 | 1.00th=[ 519], 5.00th=[ 586], 10.00th=[ 644], 20.00th=[ 701], 00:09:34.147 | 30.00th=[ 766], 40.00th=[ 906], 50.00th=[ 1037], 60.00th=[ 1074], 00:09:34.147 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1221], 00:09:34.147 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:09:34.147 | 99.99th=[42730] 00:09:34.147 bw ( KiB/s): min= 96, max= 3608, per=23.84%, avg=1758.40, stdev=1750.59, samples=5 00:09:34.147 iops : min= 24, max= 902, avg=439.60, stdev=437.65, samples=5 00:09:34.147 lat (usec) : 500=0.53%, 750=27.79%, 1000=17.28% 00:09:34.147 lat (msec) : 2=52.36%, 10=0.06%, 20=0.06%, 50=1.87% 00:09:34.147 cpu : usr=0.46%, sys=1.22%, ctx=1715, majf=0, minf=1 00:09:34.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.147 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.147 issued rwts: total=1713,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.147 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2874628: Fri Dec 6 17:46:21 2024 00:09:34.147 read: IOPS=106, BW=425KiB/s (435kB/s)(1356KiB/3189msec) 00:09:34.147 slat (usec): min=3, max=15800, avg=150.49, stdev=1162.05 00:09:34.147 clat (usec): min=543, max=42671, avg=9249.06, stdev=16346.91 00:09:34.147 lat (usec): min=586, max=51092, avg=9380.91, stdev=16393.02 00:09:34.147 clat percentiles (usec): 00:09:34.147 | 1.00th=[ 693], 5.00th=[ 766], 10.00th=[ 848], 20.00th=[ 922], 00:09:34.147 | 30.00th=[ 1020], 40.00th=[ 1057], 50.00th=[ 1106], 60.00th=[ 1139], 00:09:34.147 | 70.00th=[ 1188], 80.00th=[35390], 90.00th=[42206], 95.00th=[42206], 00:09:34.147 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:34.147 | 99.99th=[42730] 00:09:34.147 bw ( KiB/s): min= 96, max= 905, per=5.67%, avg=418.83, stdev=374.22, samples=6 00:09:34.147 iops : min= 24, max= 226, avg=104.67, stdev=93.49, samples=6 00:09:34.147 lat (usec) : 750=3.82%, 1000=23.53% 00:09:34.147 lat (msec) : 2=52.06%, 20=0.29%, 50=20.00% 
00:09:34.147 cpu : usr=0.13%, sys=0.38%, ctx=345, majf=0, minf=1 00:09:34.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.147 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.147 issued rwts: total=340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.147 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2874645: Fri Dec 6 17:46:21 2024 00:09:34.147 read: IOPS=1306, BW=5226KiB/s (5352kB/s)(14.6MiB/2870msec) 00:09:34.147 slat (nsec): min=3024, max=59010, avg=15856.07, stdev=7678.10 00:09:34.147 clat (usec): min=119, max=1266, avg=739.84, stdev=138.41 00:09:34.147 lat (usec): min=129, max=1278, avg=755.69, stdev=138.71 00:09:34.147 clat percentiles (usec): 00:09:34.147 | 1.00th=[ 408], 5.00th=[ 486], 10.00th=[ 545], 20.00th=[ 619], 00:09:34.147 | 30.00th=[ 668], 40.00th=[ 717], 50.00th=[ 758], 60.00th=[ 799], 00:09:34.147 | 70.00th=[ 832], 80.00th=[ 857], 90.00th=[ 906], 95.00th=[ 938], 00:09:34.147 | 99.00th=[ 1012], 99.50th=[ 1045], 99.90th=[ 1172], 99.95th=[ 1172], 00:09:34.147 | 99.99th=[ 1270] 00:09:34.147 bw ( KiB/s): min= 5072, max= 5600, per=72.04%, avg=5312.00, stdev=203.65, samples=5 00:09:34.147 iops : min= 1268, max= 1400, avg=1328.00, stdev=50.91, samples=5 00:09:34.147 lat (usec) : 250=0.03%, 500=6.05%, 750=41.75%, 1000=51.00% 00:09:34.147 lat (msec) : 2=1.15% 00:09:34.147 cpu : usr=0.87%, sys=2.27%, ctx=3751, majf=0, minf=2 00:09:34.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.147 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.147 issued rwts: total=3751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.147 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2874651: Fri Dec 6 17:46:21 2024 00:09:34.147 read: IOPS=28, BW=114KiB/s (117kB/s)(312KiB/2728msec) 00:09:34.147 slat (nsec): min=8691, max=34942, avg=25102.96, stdev=5513.92 00:09:34.147 clat (usec): min=689, max=42941, avg=34669.04, stdev=15805.60 00:09:34.147 lat (usec): min=724, max=42968, avg=34694.12, stdev=15807.14 00:09:34.147 clat percentiles (usec): 00:09:34.147 | 1.00th=[ 693], 5.00th=[ 1090], 10.00th=[ 1156], 20.00th=[41681], 00:09:34.147 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:09:34.147 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:09:34.147 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:09:34.147 | 99.99th=[42730] 00:09:34.147 bw ( KiB/s): min= 96, max= 200, per=1.57%, avg=116.80, stdev=46.51, samples=5 00:09:34.147 iops : min= 24, max= 50, avg=29.20, stdev=11.63, samples=5 00:09:34.147 lat (usec) : 750=1.27%, 1000=2.53% 00:09:34.147 lat (msec) : 2=13.92%, 50=81.01% 00:09:34.147 cpu : usr=0.15%, sys=0.00%, ctx=79, majf=0, minf=2 00:09:34.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.147 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.147 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.147 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:09:34.147 00:09:34.147 Run status group 0 (all jobs): 00:09:34.147 READ: bw=7374KiB/s (7551kB/s), 114KiB/s-5226KiB/s (117kB/s-5352kB/s), io=23.0MiB (24.1MB), run=2728-3189msec 00:09:34.147 00:09:34.147 Disk stats (read/write): 00:09:34.147 nvme0n1: ios=1552/0, merge=0/0, ticks=2763/0, in_queue=2763, util=95.26% 00:09:34.147 nvme0n2: ios=337/0, merge=0/0, ticks=3042/0, in_queue=3042, util=95.21% 00:09:34.147 nvme0n3: ios=3750/0, merge=0/0, ticks=2696/0, in_queue=2696, util=96.49% 00:09:34.147 nvme0n4: ios=75/0, merge=0/0, ticks=2581/0, in_queue=2581, util=96.45% 00:09:34.147 17:46:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.147 17:46:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:34.406 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.406 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:34.664 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.664 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:34.664 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:34.664 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:34.922 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:34.922 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2874324 00:09:34.922 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:34.922 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:34.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.922 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:34.922 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:34.922 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:34.922 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.922 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:34.922 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.922 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:34.922 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:34.922 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:34.922 nvmf hotplug test: fio failed as expected 00:09:34.922 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:35.181 rmmod nvme_tcp 00:09:35.181 rmmod nvme_fabrics 00:09:35.181 rmmod nvme_keyring 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2870818 ']' 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2870818 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2870818 ']' 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2870818 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2870818 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2870818' 00:09:35.181 killing process with pid 2870818 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2870818 00:09:35.181 17:46:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2870818 00:09:35.441 17:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' 
== iso ']' 00:09:35.441 17:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:35.441 17:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:35.441 17:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:35.441 17:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:35.441 17:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:35.441 17:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:35.441 17:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:35.441 17:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:35.441 17:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.441 17:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.441 17:46:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.343 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:37.343 00:09:37.343 real 0m25.390s 00:09:37.343 user 2m16.524s 00:09:37.343 sys 0m7.387s 00:09:37.343 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.343 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.343 ************************************ 00:09:37.343 END TEST nvmf_fio_target 00:09:37.343 ************************************ 00:09:37.343 17:46:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:37.343 17:46:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:37.343 17:46:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.343 17:46:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:37.343 ************************************ 00:09:37.343 START TEST nvmf_bdevio 00:09:37.343 ************************************ 00:09:37.343 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:37.601 * Looking for test storage... 
00:09:37.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:37.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.601 --rc genhtml_branch_coverage=1 00:09:37.601 --rc genhtml_function_coverage=1 00:09:37.601 --rc genhtml_legend=1 00:09:37.601 --rc geninfo_all_blocks=1 00:09:37.601 --rc geninfo_unexecuted_blocks=1 00:09:37.601 00:09:37.601 ' 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:37.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.601 --rc genhtml_branch_coverage=1 00:09:37.601 --rc genhtml_function_coverage=1 00:09:37.601 --rc genhtml_legend=1 00:09:37.601 --rc geninfo_all_blocks=1 00:09:37.601 --rc geninfo_unexecuted_blocks=1 00:09:37.601 00:09:37.601 ' 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:37.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.601 --rc genhtml_branch_coverage=1 00:09:37.601 --rc genhtml_function_coverage=1 00:09:37.601 --rc genhtml_legend=1 00:09:37.601 --rc geninfo_all_blocks=1 00:09:37.601 --rc geninfo_unexecuted_blocks=1 00:09:37.601 00:09:37.601 ' 00:09:37.601 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:37.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.601 --rc genhtml_branch_coverage=1 00:09:37.601 --rc genhtml_function_coverage=1 00:09:37.601 --rc genhtml_legend=1 00:09:37.601 --rc geninfo_all_blocks=1 00:09:37.601 --rc geninfo_unexecuted_blocks=1 00:09:37.601 00:09:37.602 ' 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:37.602 17:46:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.877 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:42.878 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:42.878 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:42.878 17:46:30 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:42.878 Found net devices under 0000:31:00.0: cvl_0_0 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:42.878 Found net devices under 0000:31:00.1: cvl_0_1 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.878 
17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.878 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:43.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:43.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:09:43.138 00:09:43.138 --- 10.0.0.2 ping statistics --- 00:09:43.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.138 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:43.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:43.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:09:43.138 00:09:43.138 --- 10.0.0.1 ping statistics --- 00:09:43.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.138 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2880132 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2880132 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2880132 ']' 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.138 17:46:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:43.397 [2024-12-06 17:46:31.005460] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:09:43.397 [2024-12-06 17:46:31.005526] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.397 [2024-12-06 17:46:31.086263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.397 [2024-12-06 17:46:31.123567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.397 [2024-12-06 17:46:31.123601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.397 [2024-12-06 17:46:31.123607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.397 [2024-12-06 17:46:31.123612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.397 [2024-12-06 17:46:31.123617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.397 [2024-12-06 17:46:31.124978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:43.397 [2024-12-06 17:46:31.125149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:43.397 [2024-12-06 17:46:31.125474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.397 [2024-12-06 17:46:31.125475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.334 [2024-12-06 17:46:31.822226] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.334 Malloc0 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.334 17:46:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:44.334 [2024-12-06 17:46:31.875663] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:44.334 { 00:09:44.334 "params": { 00:09:44.334 "name": "Nvme$subsystem", 00:09:44.334 "trtype": "$TEST_TRANSPORT", 00:09:44.334 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.334 "adrfam": "ipv4", 00:09:44.334 "trsvcid": "$NVMF_PORT", 00:09:44.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.334 "hdgst": ${hdgst:-false}, 00:09:44.334 "ddgst": ${ddgst:-false} 00:09:44.334 }, 00:09:44.334 "method": "bdev_nvme_attach_controller" 00:09:44.334 } 00:09:44.334 EOF 00:09:44.334 )") 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:44.334 17:46:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:44.334 "params": { 00:09:44.334 "name": "Nvme1", 00:09:44.334 "trtype": "tcp", 00:09:44.334 "traddr": "10.0.0.2", 00:09:44.334 "adrfam": "ipv4", 00:09:44.334 "trsvcid": "4420", 00:09:44.334 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.334 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.334 "hdgst": false, 00:09:44.334 "ddgst": false 00:09:44.334 }, 00:09:44.334 "method": "bdev_nvme_attach_controller" 00:09:44.334 }' 00:09:44.334 [2024-12-06 17:46:31.912959] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:09:44.334 [2024-12-06 17:46:31.913012] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880240 ] 00:09:44.334 [2024-12-06 17:46:31.992091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:44.334 [2024-12-06 17:46:32.030750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.334 [2024-12-06 17:46:32.030903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.334 [2024-12-06 17:46:32.030903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.593 I/O targets: 00:09:44.593 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:44.593 00:09:44.593 00:09:44.593 CUnit - A unit testing framework for C - Version 2.1-3 00:09:44.593 http://cunit.sourceforge.net/ 00:09:44.593 00:09:44.593 00:09:44.593 Suite: bdevio tests on: Nvme1n1 00:09:44.593 Test: blockdev write read block ...passed 00:09:44.593 Test: blockdev write zeroes read block ...passed 00:09:44.593 Test: blockdev write zeroes read no split ...passed 00:09:44.593 Test: blockdev write zeroes read split ...passed 00:09:44.593 Test: blockdev write zeroes read split partial ...passed 00:09:44.593 Test: blockdev reset ...[2024-12-06 17:46:32.328948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:44.593 [2024-12-06 17:46:32.329022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ad0e0 (9): Bad file descriptor 00:09:44.851 [2024-12-06 17:46:32.437757] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:44.851 passed 00:09:44.851 Test: blockdev write read 8 blocks ...passed 00:09:44.851 Test: blockdev write read size > 128k ...passed 00:09:44.851 Test: blockdev write read invalid size ...passed 00:09:44.851 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:44.851 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:44.851 Test: blockdev write read max offset ...passed 00:09:44.851 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:44.851 Test: blockdev writev readv 8 blocks ...passed 00:09:44.851 Test: blockdev writev readv 30 x 1block ...passed 00:09:44.851 Test: blockdev writev readv block ...passed 00:09:44.851 Test: blockdev writev readv size > 128k ...passed 00:09:44.851 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:44.851 Test: blockdev comparev and writev ...[2024-12-06 17:46:32.620433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.851 [2024-12-06 17:46:32.620459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:44.851 [2024-12-06 17:46:32.620470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.851 [2024-12-06 17:46:32.620476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:44.851 [2024-12-06 17:46:32.620919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.851 [2024-12-06 17:46:32.620927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:44.851 [2024-12-06 17:46:32.620937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.851 [2024-12-06 17:46:32.620942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:44.851 [2024-12-06 17:46:32.621371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.851 [2024-12-06 17:46:32.621379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:44.851 [2024-12-06 17:46:32.621389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.851 [2024-12-06 17:46:32.621395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:44.851 [2024-12-06 17:46:32.621831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.851 [2024-12-06 17:46:32.621838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:44.851 [2024-12-06 17:46:32.621849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:44.851 [2024-12-06 17:46:32.621855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:44.851 passed 00:09:45.109 Test: blockdev nvme passthru rw ...passed 00:09:45.109 Test: blockdev nvme passthru vendor specific ...[2024-12-06 17:46:32.706966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:45.109 [2024-12-06 17:46:32.706977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:45.109 [2024-12-06 17:46:32.707313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:45.109 [2024-12-06 17:46:32.707321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:45.109 [2024-12-06 17:46:32.707690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:45.109 [2024-12-06 17:46:32.707697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:45.109 [2024-12-06 17:46:32.708033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:45.109 [2024-12-06 17:46:32.708040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:45.109 passed 00:09:45.109 Test: blockdev nvme admin passthru ...passed 00:09:45.109 Test: blockdev copy ...passed 00:09:45.109 00:09:45.109 Run Summary: Type Total Ran Passed Failed Inactive 00:09:45.109 suites 1 1 n/a 0 0 00:09:45.109 tests 23 23 23 0 0 00:09:45.110 asserts 152 152 152 0 n/a 00:09:45.110 00:09:45.110 Elapsed time = 1.089 seconds 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:45.110 rmmod nvme_tcp 00:09:45.110 rmmod nvme_fabrics 00:09:45.110 rmmod nvme_keyring 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
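Two notes on the output above. The alternating COMPARE FAILURE (02/85) / ABORTED - FAILED FUSED (00/09) completions in the comparev-and-writev test are the expected negative path for fused compare-and-write: when the fused COMPARE miscompares, the controller must complete its paired WRITE with Aborted - Failed Fused. And everything from bdevio.sh@30 onward is nvmftestfini unwinding the test; condensed into a sketch from the trace here and the lines that follow (the pid and cvl_* names belong to this run only, and the netns removal is an assumption about what _remove_spdk_ns reduces to):

    # nvmftestfini, condensed from the trace; names and pid are run-specific
    modprobe -v -r nvme-tcp                                # also drags out nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 2880132                                           # the nvmf target app, then wait on it
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                               # clear the initiator-side interface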
00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2880132 ']' 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2880132 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2880132 ']' 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2880132 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.110 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2880132 00:09:45.368 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:45.368 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:45.368 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2880132' 00:09:45.368 killing process with pid 2880132 00:09:45.368 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2880132 00:09:45.368 17:46:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2880132 00:09:45.368 17:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:45.368 17:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:45.368 17:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:45.368 17:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:45.368 17:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:45.368 17:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:45.368 17:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:45.368 17:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:45.368 17:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:45.368 17:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.368 17:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.368 17:46:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:47.903 00:09:47.903 real 0m9.978s 00:09:47.903 user 0m11.270s 00:09:47.903 sys 0m4.777s 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:47.903 ************************************ 00:09:47.903 END TEST nvmf_bdevio 00:09:47.903 ************************************ 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:47.903 00:09:47.903 real 4m30.251s 00:09:47.903 user 10m57.477s 00:09:47.903 sys 1m26.979s 
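The starred START/END TEST banners and the real/user/sys triples in this log all come from autotest_common.sh's run_test wrapper, which nvmf_bdevio and the enclosing nvmf_target_core suite both passed through and which nvmf_target_extra enters next. Only the banner format, the argument-count check ('[' 3 -le 1 ']'), and the time output are visible in the trace, so the following is an inferred sketch, not the real helper:

    # inferred shape of run_test, reconstructed from the banners in this log;
    # the in-tree helper also manages xtrace and timing hooks not shown here
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                    # emits the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }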
00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.903 ************************************ 00:09:47.903 END TEST nvmf_target_core 00:09:47.903 ************************************ 00:09:47.903 17:46:35 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:47.903 17:46:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:47.903 17:46:35 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.903 17:46:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:47.903 ************************************ 00:09:47.903 START TEST nvmf_target_extra 00:09:47.903 ************************************ 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:47.903 * Looking for test storage... 00:09:47.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:47.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.903 --rc genhtml_branch_coverage=1 00:09:47.903 --rc genhtml_function_coverage=1 00:09:47.903 --rc genhtml_legend=1 00:09:47.903 --rc geninfo_all_blocks=1 00:09:47.903 --rc geninfo_unexecuted_blocks=1 00:09:47.903 00:09:47.903 ' 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:47.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.903 --rc genhtml_branch_coverage=1 00:09:47.903 --rc genhtml_function_coverage=1 00:09:47.903 --rc genhtml_legend=1 00:09:47.903 --rc geninfo_all_blocks=1 00:09:47.903 --rc geninfo_unexecuted_blocks=1 00:09:47.903 00:09:47.903 ' 00:09:47.903 17:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:47.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.904 --rc genhtml_branch_coverage=1 00:09:47.904 --rc genhtml_function_coverage=1 00:09:47.904 --rc genhtml_legend=1 00:09:47.904 --rc geninfo_all_blocks=1 00:09:47.904 --rc geninfo_unexecuted_blocks=1 00:09:47.904 00:09:47.904 ' 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:47.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.904 --rc genhtml_branch_coverage=1 00:09:47.904 --rc genhtml_function_coverage=1 00:09:47.904 --rc genhtml_legend=1 00:09:47.904 --rc geninfo_all_blocks=1 00:09:47.904 --rc geninfo_unexecuted_blocks=1 00:09:47.904 00:09:47.904 ' 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
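The lt 1.15 2 walk traced a few lines up is scripts/common.sh deciding whether the installed lcov predates 2.x before choosing coverage flags: each version string is split on '.', '-', and ':', then compared component-wise as integers, with missing components treated as 0. A condensed, self-contained sketch (the in-tree cmp_versions also validates every component through its decimal helper, omitted here):

    # condensed sketch of the lt/cmp_versions logic traced above
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                                              # equal is not "less than"
    }

    lt 1.15 2 && echo "lcov is pre-2.x"   # first components: 1 < 2, as in the trace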
00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:47.904 ************************************ 00:09:47.904 START TEST nvmf_example 00:09:47.904 ************************************ 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:47.904 * Looking for test storage... 
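The "[: : integer expression expected" complaint above (it recurs below when nvmf_example re-sources common.sh) is a benign bug on that script's line 33: build_nvmf_app_args ends up running '[' '' -eq 1 ']' when the flag being tested is unset, and the test builtin requires integers on both sides of -eq. A two-line reproduction plus the usual guard, as a sketch:

    # reproducing the common.sh line-33 warning, with the standard fix
    flag=""
    [ "$flag" -eq 1 ]        # bash: [: : integer expression expected (exit status 2)
    [ "${flag:-0}" -eq 1 ]   # defaulting the empty value keeps -eq well-typed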
00:09:47.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:47.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.904 --rc genhtml_branch_coverage=1 00:09:47.904 --rc genhtml_function_coverage=1 00:09:47.904 --rc genhtml_legend=1 00:09:47.904 --rc geninfo_all_blocks=1 00:09:47.904 --rc geninfo_unexecuted_blocks=1 00:09:47.904 00:09:47.904 ' 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:47.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.904 --rc genhtml_branch_coverage=1 00:09:47.904 --rc genhtml_function_coverage=1 00:09:47.904 --rc genhtml_legend=1 00:09:47.904 --rc geninfo_all_blocks=1 00:09:47.904 --rc geninfo_unexecuted_blocks=1 00:09:47.904 00:09:47.904 ' 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:47.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.904 --rc genhtml_branch_coverage=1 00:09:47.904 --rc genhtml_function_coverage=1 00:09:47.904 --rc genhtml_legend=1 00:09:47.904 --rc geninfo_all_blocks=1 00:09:47.904 --rc geninfo_unexecuted_blocks=1 00:09:47.904 00:09:47.904 ' 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:47.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.904 --rc genhtml_branch_coverage=1 00:09:47.904 --rc genhtml_function_coverage=1 00:09:47.904 --rc genhtml_legend=1 00:09:47.904 --rc geninfo_all_blocks=1 00:09:47.904 --rc geninfo_unexecuted_blocks=1 00:09:47.904 00:09:47.904 ' 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:47.904 17:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.904 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:47.905 17:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:47.905 17:46:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:53.175 17:46:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:53.175 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:53.175 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:53.175 Found net devices under 0000:31:00.0: cvl_0_0 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.175 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:53.176 Found net devices under 0000:31:00.1: cvl_0_1 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.176 17:46:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:53.176 17:46:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:53.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:53.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:09:53.435 00:09:53.435 --- 10.0.0.2 ping statistics --- 00:09:53.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.435 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:53.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:53.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:09:53.435 00:09:53.435 --- 10.0.0.1 ping statistics --- 00:09:53.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.435 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2884987 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2884987 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2884987 ']' 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.435 17:46:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.435 17:46:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:54.373 17:46:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:06.577 Initializing NVMe Controllers 00:10:06.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:06.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:06.577 Initialization complete. Launching workers. 00:10:06.577 ======================================================== 00:10:06.577 Latency(us) 00:10:06.577 Device Information : IOPS MiB/s Average min max 00:10:06.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19891.69 77.70 3217.38 611.06 15492.75 00:10:06.577 ======================================================== 00:10:06.577 Total : 19891.69 77.70 3217.38 611.06 15492.75 00:10:06.577 00:10:06.577 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:06.577 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:06.577 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:06.577 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:06.577 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:06.577 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:06.577 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:06.577 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:06.577 rmmod nvme_tcp 00:10:06.577 rmmod nvme_fabrics 00:10:06.577 rmmod nvme_keyring 00:10:06.577 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:06.577 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:06.577 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:06.577 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2884987 ']' 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2884987 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2884987 ']' 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2884987 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2884987 00:10:06.578 17:46:52 
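The trace above is the whole life of the benchmark: start the example target, create the transport, back a subsystem with a RAM disk, expose a TCP listener, then drive it with spdk_nvme_perf. Condensed into plain commands (a sketch only: rpc.py stands in for the harness's rpc_cmd wrapper, paths are shortened, and all values are the ones visible in the trace):

    # start the example nvmf target in the test namespace, cores 0-3
    ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
    # TCP transport; -o and -u are the TCP tuning knobs as passed by the test
    # (C2H-success toggle and an 8192-byte I/O unit size, per rpc.py's flags)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512        # 64 MB RAM disk -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # drive it: queue depth 64, 4 KiB I/O, 30% reads / 70% writes, 10 seconds
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The latency table printed above (about 19.9k IOPS, ~3.2 ms average at this queue depth) is the output of that final command.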
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2884987' 00:10:06.578 killing process with pid 2884987 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2884987 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2884987 00:10:06.578 nvmf threads initialize successfully 00:10:06.578 bdev subsystem init successfully 00:10:06.578 created a nvmf target service 00:10:06.578 create targets's poll groups done 00:10:06.578 all subsystems of target started 00:10:06.578 nvmf target is running 00:10:06.578 all subsystems of target stopped 00:10:06.578 destroy targets's poll groups done 00:10:06.578 destroyed the nvmf target service 00:10:06.578 bdev subsystem finish successfully 00:10:06.578 nvmf threads destroy successfully 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.578 17:46:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.149 00:10:07.149 real 0m19.325s 00:10:07.149 user 0m45.554s 00:10:07.149 sys 0m5.568s 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.149 ************************************ 00:10:07.149 END TEST nvmf_example 00:10:07.149 ************************************ 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:07.149 ************************************ 00:10:07.149 START TEST nvmf_filesystem 00:10:07.149 ************************************ 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:07.149 * Looking for test storage... 00:10:07.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:07.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.149 --rc genhtml_branch_coverage=1 00:10:07.149 --rc genhtml_function_coverage=1 00:10:07.149 --rc genhtml_legend=1 00:10:07.149 --rc geninfo_all_blocks=1 00:10:07.149 --rc geninfo_unexecuted_blocks=1 00:10:07.149 00:10:07.149 ' 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:07.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.149 --rc genhtml_branch_coverage=1 00:10:07.149 --rc genhtml_function_coverage=1 00:10:07.149 --rc genhtml_legend=1 00:10:07.149 --rc geninfo_all_blocks=1 00:10:07.149 --rc geninfo_unexecuted_blocks=1 00:10:07.149 00:10:07.149 ' 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:07.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.149 --rc genhtml_branch_coverage=1 00:10:07.149 --rc genhtml_function_coverage=1 00:10:07.149 --rc genhtml_legend=1 00:10:07.149 --rc geninfo_all_blocks=1 00:10:07.149 --rc geninfo_unexecuted_blocks=1 00:10:07.149 00:10:07.149 ' 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:07.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.149 --rc genhtml_branch_coverage=1 00:10:07.149 --rc genhtml_function_coverage=1 00:10:07.149 --rc genhtml_legend=1 00:10:07.149 --rc geninfo_all_blocks=1 00:10:07.149 --rc geninfo_unexecuted_blocks=1 00:10:07.149 00:10:07.149 ' 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:07.149 17:46:54 
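The cmp_versions trace above is deciding whether the installed lcov predates 2.x, so the matching spelling of the coverage flags can be exported in LCOV_OPTS. It splits each version string on '.', '-' and ':' and compares the fields numerically, left to right. A minimal standalone sketch of the same idea, simplified (assumption) to dot-separated numeric fields only:

    lt() {
        # return 0 if version $1 is strictly less than version $2
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }

    lt 1.15 2 && echo "lcov is older than 2.x"   # matches the traced 'lt 1.15 2'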
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:07.149 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:07.150 
17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:07.150 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:07.150 #define SPDK_CONFIG_H 00:10:07.150 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:07.150 #define SPDK_CONFIG_APPS 1 00:10:07.150 #define SPDK_CONFIG_ARCH native 00:10:07.150 #undef SPDK_CONFIG_ASAN 00:10:07.150 #undef SPDK_CONFIG_AVAHI 00:10:07.150 #undef SPDK_CONFIG_CET 00:10:07.150 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:07.150 #define SPDK_CONFIG_COVERAGE 1 00:10:07.150 #define SPDK_CONFIG_CROSS_PREFIX 00:10:07.150 #undef SPDK_CONFIG_CRYPTO 00:10:07.150 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:07.150 #undef SPDK_CONFIG_CUSTOMOCF 00:10:07.150 #undef SPDK_CONFIG_DAOS 00:10:07.150 #define SPDK_CONFIG_DAOS_DIR 00:10:07.150 #define SPDK_CONFIG_DEBUG 1 00:10:07.150 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:07.150 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:07.150 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:07.150 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:07.150 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:07.150 #undef SPDK_CONFIG_DPDK_UADK 00:10:07.150 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:07.150 #define SPDK_CONFIG_EXAMPLES 1 00:10:07.150 #undef SPDK_CONFIG_FC 00:10:07.150 #define SPDK_CONFIG_FC_PATH 00:10:07.151 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:07.151 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:07.151 #define SPDK_CONFIG_FSDEV 1 00:10:07.151 #undef SPDK_CONFIG_FUSE 00:10:07.151 #undef SPDK_CONFIG_FUZZER 00:10:07.151 #define SPDK_CONFIG_FUZZER_LIB 00:10:07.151 #undef SPDK_CONFIG_GOLANG 00:10:07.151 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:07.151 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:07.151 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:07.151 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:07.151 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:07.151 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:07.151 #undef SPDK_CONFIG_HAVE_LZ4 00:10:07.151 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:07.151 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:07.151 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:07.151 #define SPDK_CONFIG_IDXD 1 00:10:07.151 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:07.151 #undef SPDK_CONFIG_IPSEC_MB 00:10:07.151 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:07.151 #define SPDK_CONFIG_ISAL 1 00:10:07.151 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:07.151 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:07.151 #define SPDK_CONFIG_LIBDIR 00:10:07.151 #undef SPDK_CONFIG_LTO 00:10:07.151 #define SPDK_CONFIG_MAX_LCORES 128 00:10:07.151 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:07.151 #define SPDK_CONFIG_NVME_CUSE 1 00:10:07.151 #undef SPDK_CONFIG_OCF 00:10:07.151 #define SPDK_CONFIG_OCF_PATH 00:10:07.151 #define SPDK_CONFIG_OPENSSL_PATH 00:10:07.151 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:07.151 #define SPDK_CONFIG_PGO_DIR 00:10:07.151 #undef SPDK_CONFIG_PGO_USE 00:10:07.151 #define SPDK_CONFIG_PREFIX /usr/local 00:10:07.151 #undef SPDK_CONFIG_RAID5F 00:10:07.151 #undef SPDK_CONFIG_RBD 00:10:07.151 #define SPDK_CONFIG_RDMA 1 00:10:07.151 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:07.151 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:07.151 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:07.151 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:07.151 #define SPDK_CONFIG_SHARED 1 00:10:07.151 #undef SPDK_CONFIG_SMA 00:10:07.151 #define SPDK_CONFIG_TESTS 1 00:10:07.151 #undef SPDK_CONFIG_TSAN 
00:10:07.151 #define SPDK_CONFIG_UBLK 1 00:10:07.151 #define SPDK_CONFIG_UBSAN 1 00:10:07.151 #undef SPDK_CONFIG_UNIT_TESTS 00:10:07.151 #undef SPDK_CONFIG_URING 00:10:07.151 #define SPDK_CONFIG_URING_PATH 00:10:07.151 #undef SPDK_CONFIG_URING_ZNS 00:10:07.151 #undef SPDK_CONFIG_USDT 00:10:07.151 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:07.151 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:07.151 #define SPDK_CONFIG_VFIO_USER 1 00:10:07.151 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:07.151 #define SPDK_CONFIG_VHOST 1 00:10:07.151 #define SPDK_CONFIG_VIRTIO 1 00:10:07.151 #undef SPDK_CONFIG_VTUNE 00:10:07.151 #define SPDK_CONFIG_VTUNE_DIR 00:10:07.151 #define SPDK_CONFIG_WERROR 1 00:10:07.151 #define SPDK_CONFIG_WPDK_DIR 00:10:07.151 #undef SPDK_CONFIG_XNVME 00:10:07.151 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:07.151 17:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:07.151 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:07.152 17:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:07.152 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
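The @195-@246 entries above pin down the sanitizer and RPC environment for the whole run. Restated as a minimal standalone snippet (the suppression file is written with a plain echo here; the script itself builds it via the cat/echo pair at @206 and @242):

export PYTHONDONTWRITEBYTECODE=1
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

# Suppress the known libfuse3 leak instead of failing every LSAN-enabled test.
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" > "$asan_suppression_file"
export LSAN_OPTIONS=suppressions=$asan_suppression_file

export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
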
00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2888080 ]] 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2888080 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
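set_test_storage, called here with 2147483648 (2 GiB) and traced in full over the entries that follow, walks a list of candidate directories and exports the first one whose backing filesystem can hold the requested size plus slack. A condensed sketch of the logic the trace below executes; testdir is supplied by the calling test, and the overlay/tmpfs special-casing at @393-@395 is omitted:

set_test_storage() {
    local requested_size=$1 target_space mount target_dir
    local -A mounts fss sizes avails uses
    local storage_fallback storage_candidates

    storage_fallback=$(mktemp -udt spdk.XXXXXX)
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    mkdir -p "${storage_candidates[@]}"
    requested_size=$((requested_size + 64 * 1024 * 1024))   # 2214592512 in the trace

    # Build per-mountpoint tables from df -T, as the @373-@376 loop does.
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        avails["$mount"]=$avail
        uses["$mount"]=$use
    done < <(df -T | grep -v Filesystem)

    printf '* Looking for test storage...\n'
    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails["$mount"]}
        (( target_space >= requested_size )) || continue
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$target_dir"
        return 0
    done
    return 1
}
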
00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:07.153 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.UaA6Km 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.UaA6Km/tests/target /tmp/spdk.UaA6Km 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:07.154 17:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=122502238208 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356533760 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6854295552 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64668233728 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678264832 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847713792 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871306752 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23592960 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=349184 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=154624 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:07.154 17:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677838848 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678268928 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=430080 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:07.154 * Looking for test storage... 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=122502238208 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9068888064 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:07.154 17:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:07.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.414 --rc genhtml_branch_coverage=1 00:10:07.414 --rc genhtml_function_coverage=1 00:10:07.414 --rc genhtml_legend=1 00:10:07.414 --rc geninfo_all_blocks=1 00:10:07.414 --rc geninfo_unexecuted_blocks=1 00:10:07.414 00:10:07.414 ' 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:07.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.414 --rc genhtml_branch_coverage=1 00:10:07.414 --rc genhtml_function_coverage=1 00:10:07.414 --rc genhtml_legend=1 00:10:07.414 --rc geninfo_all_blocks=1 00:10:07.414 --rc geninfo_unexecuted_blocks=1 00:10:07.414 00:10:07.414 ' 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:07.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.414 --rc genhtml_branch_coverage=1 00:10:07.414 --rc genhtml_function_coverage=1 00:10:07.414 --rc genhtml_legend=1 00:10:07.414 --rc geninfo_all_blocks=1 00:10:07.414 --rc geninfo_unexecuted_blocks=1 00:10:07.414 00:10:07.414 ' 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:07.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.414 --rc genhtml_branch_coverage=1 00:10:07.414 --rc genhtml_function_coverage=1 00:10:07.414 --rc genhtml_legend=1 00:10:07.414 --rc geninfo_all_blocks=1 00:10:07.414 --rc geninfo_unexecuted_blocks=1 00:10:07.414 00:10:07.414 ' 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.414 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.415 17:46:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:07.415 17:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:12.688 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:12.688 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.688 17:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.688 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:12.688 Found net devices under 0000:31:00.0: cvl_0_0 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:12.689 Found net devices under 0000:31:00.1: cvl_0_1 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:12.689 17:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.689 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:12.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:10:12.949 00:10:12.949 --- 10.0.0.2 ping statistics --- 00:10:12.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.949 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:12.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:10:12.949 00:10:12.949 --- 10.0.0.1 ping statistics --- 00:10:12.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.949 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.949 ************************************ 00:10:12.949 START TEST nvmf_filesystem_no_in_capsule 00:10:12.949 ************************************ 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2892051 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2892051 00:10:12.949 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2892051 ']' 00:10:12.950 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.950 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.950 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.950 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.950 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.950 17:47:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.212 [2024-12-06 17:47:00.786961] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:10:13.212 [2024-12-06 17:47:00.787023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.212 [2024-12-06 17:47:00.865593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:13.212 [2024-12-06 17:47:00.904288] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.212 [2024-12-06 17:47:00.904326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.212 [2024-12-06 17:47:00.904333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.212 [2024-12-06 17:47:00.904338] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.212 [2024-12-06 17:47:00.904343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
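nvmfappstart (@507-@510 above) launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace that nvmf_tcp_init assembled at @265-@291 (physical port moved into the netns, 10.0.0.x addresses assigned, an iptables ACCEPT rule for port 4420, both directions ping-verified), then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A minimal sketch; the polling loop is condensed, and probing via rpc.py rpc_get_methods is an assumption about waitforlisten's internals rather than a quote of them:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Mirror the @508 invocation: the target runs inside the test namespace.
ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2> /dev/null || exit 1    # app died during startup
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done
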
00:10:13.212 [2024-12-06 17:47:00.905803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.212 [2024-12-06 17:47:00.905966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.212 [2024-12-06 17:47:00.906136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.212 [2024-12-06 17:47:00.906138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.782 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.782 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:13.782 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:13.782 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:13.782 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.782 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.782 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:13.782 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:13.782 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.782 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.782 [2024-12-06 17:47:01.604087] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:14.041 Malloc1 00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.041 17:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:14.041 [2024-12-06 17:47:01.733933] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:10:14.041 {
00:10:14.041 "name": "Malloc1",
00:10:14.041 "aliases": [
00:10:14.041 "0331368b-6d09-441a-8b02-298eb44b180d"
00:10:14.041 ],
00:10:14.041 "product_name": "Malloc disk",
00:10:14.041 "block_size": 512,
00:10:14.041 "num_blocks": 1048576,
00:10:14.041 "uuid": "0331368b-6d09-441a-8b02-298eb44b180d",
00:10:14.041 "assigned_rate_limits": {
00:10:14.041 "rw_ios_per_sec": 0,
00:10:14.041 "rw_mbytes_per_sec": 0,
00:10:14.041 "r_mbytes_per_sec": 0,
00:10:14.041 "w_mbytes_per_sec": 0
00:10:14.041 },
00:10:14.041 "claimed": true,
00:10:14.041 "claim_type": "exclusive_write",
00:10:14.041 "zoned": false,
00:10:14.041 "supported_io_types": {
00:10:14.041 "read": true,
00:10:14.041 "write": true,
00:10:14.041 "unmap": true,
00:10:14.041 "flush": true,
00:10:14.041 "reset": true,
00:10:14.041 "nvme_admin": false,
00:10:14.041 "nvme_io": false,
00:10:14.041 "nvme_io_md": false,
00:10:14.041 "write_zeroes": true,
00:10:14.041 "zcopy": true,
00:10:14.041 "get_zone_info": false,
00:10:14.041 "zone_management": false,
00:10:14.041 "zone_append": false,
00:10:14.041 "compare": false,
00:10:14.041 "compare_and_write": false,
00:10:14.041 "abort": true,
00:10:14.041 "seek_hole": false,
00:10:14.041 "seek_data": false,
00:10:14.041 "copy": true,
00:10:14.041 "nvme_iov_md": false
00:10:14.041 },
00:10:14.041 "memory_domains": [
00:10:14.041 {
00:10:14.041 "dma_device_id": "system",
00:10:14.041 "dma_device_type": 1
00:10:14.041 },
00:10:14.041 {
00:10:14.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:14.041 "dma_device_type": 2
00:10:14.041 }
00:10:14.041 ],
00:10:14.041 "driver_specific": {}
00:10:14.041 }
00:10:14.041 ]'
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:10:14.041 17:47:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:15.945 17:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:10:15.945 17:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0
00:10:15.945 17:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:10:15.945 17:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:10:15.945 17:47:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2
00:10:17.850 17:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:10:17.850 17:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:10:17.850 17:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:10:17.850 17:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:10:17.850 17:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:10:17.850 17:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0
00:10:17.850 17:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:10:17.850 17:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:10:17.850 17:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:10:17.850 17:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:10:17.850 17:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:10:17.850 17:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:10:17.850 17:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:10:17.850 17:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:10:17.850 17:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:10:17.850 17:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:10:17.850 17:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:10:17.850 17:47:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:10:18.963 17:47:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:10:19.899 17:47:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']'
00:10:19.899 17:47:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1
00:10:19.899 17:47:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:19.899 17:47:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:19.899 17:47:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:19.899 ************************************
00:10:19.899 START TEST filesystem_ext4
00:10:19.899 ************************************
00:10:19.899 17:47:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1
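The xtrace above is the harness computing the expected device size: get_bdev_size multiplies the block_size and num_blocks fields that bdev_get_bdevs reports (512 * 1048576 = 536870912 bytes), and filesystem.sh@67 later compares that against the size the initiator sees via /sys/block. A minimal stand-alone sketch of the same arithmetic, assuming jq is available and calling SPDK's scripts/rpc.py directly (rpc_cmd in this harness is a wrapper around it; the rpc.py path is an assumption):

    # sketch: recompute malloc_size the way the trace above does
    bdev_info=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1)   # hypothetical rpc.py location
    bs=$(jq '.[] .block_size' <<< "$bdev_info")               # 512 in this run
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")               # 1048576 in this run
    echo $((bs * nb))                                         # 536870912 bytes = 512 MiB
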
00:10:19.899 17:47:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:10:19.899 17:47:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:10:19.899 17:47:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:10:19.899 17:47:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4
00:10:19.899 17:47:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:10:19.899 17:47:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0
00:10:19.899 17:47:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force
00:10:19.899 17:47:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:10:19.899 17:47:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:10:19.899 17:47:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:10:19.899 mke2fs 1.47.0 (5-Feb-2023)
00:10:19.899 Discarding device blocks: 0/522240 done
00:10:19.899 Creating filesystem with 522240 1k blocks and 130560 inodes
00:10:19.899 Filesystem UUID: 4a110711-73f0-4b1d-9234-e107e2f9cc81
00:10:19.899 Superblock backups stored on blocks:
00:10:19.899 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:10:19.899
00:10:19.899 Allocating group tables: 0/64 done
00:10:19.899 Writing inode tables: 0/64 done
00:10:19.899 Creating journal (8192 blocks): done
00:10:19.899 Writing superblocks and filesystem accounting information: 0/64 done
00:10:19.899
00:10:19.899 17:47:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0
00:10:19.899 17:47:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2892051
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:10:26.457
00:10:26.457 real 0m6.129s
00:10:26.457 user 0m0.012s
00:10:26.457 sys 0m0.063s
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:10:26.457 ************************************
00:10:26.457 END TEST filesystem_ext4
00:10:26.457 ************************************
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:26.457 ************************************
00:10:26.457 START TEST filesystem_btrfs
00:10:26.457 ************************************
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:10:26.457 btrfs-progs v6.8.1
00:10:26.457 See https://btrfs.readthedocs.io for more information.
00:10:26.457
00:10:26.457 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:10:26.457 NOTE: several default settings have changed in version 5.15, please make sure
00:10:26.457 this does not affect your deployments:
00:10:26.457 - DUP for metadata (-m dup)
00:10:26.457 - enabled no-holes (-O no-holes)
00:10:26.457 - enabled free-space-tree (-R free-space-tree)
00:10:26.457
00:10:26.457 Label: (null)
00:10:26.457 UUID: ef1dc4fb-0ed2-4a64-9ea7-39ad7d791d7c
00:10:26.457 Node size: 16384
00:10:26.457 Sector size: 4096 (CPU page size: 4096)
00:10:26.457 Filesystem size: 510.00MiB
00:10:26.457 Block group profiles:
00:10:26.457 Data: single 8.00MiB
00:10:26.457 Metadata: DUP 32.00MiB
00:10:26.457 System: DUP 8.00MiB
00:10:26.457 SSD detected: yes
00:10:26.457 Zoned device: no
00:10:26.457 Features: extref, skinny-metadata, no-holes, free-space-tree
00:10:26.457 Checksum: crc32c
00:10:26.457 Number of devices: 1
00:10:26.457 Devices:
00:10:26.457 ID SIZE PATH
00:10:26.457 1 510.00MiB /dev/nvme0n1p1
00:10:26.457
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0
00:10:26.457 17:47:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:26.716 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:26.716 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:10:26.716 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:26.716 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:10:26.716 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:10:26.716 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:26.716 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2892051
00:10:26.716 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:10:26.716 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:10:26.716 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:10:26.717 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:10:26.717
00:10:26.717 real 0m0.904s
00:10:26.717 user 0m0.023s
00:10:26.717 sys 0m0.086s
00:10:26.717 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:26.717 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:10:26.717 ************************************
00:10:26.717 END TEST filesystem_btrfs
00:10:26.717 ************************************
00:10:26.717 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:10:26.717 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:26.717 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:26.717 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:26.975 ************************************
00:10:26.975 START TEST filesystem_xfs
00:10:26.975 ************************************
00:10:26.975 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
00:10:26.975 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:10:26.975 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:10:26.975 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:10:26.975 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
00:10:26.975 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:10:26.975 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0
00:10:26.975 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force
00:10:26.975 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:10:26.975 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f
00:10:26.975 17:47:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:10:26.975 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:10:26.975 = sectsz=512 attr=2, projid32bit=1
00:10:26.975 = crc=1 finobt=1, sparse=1, rmapbt=0
00:10:26.975 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:10:26.975 data = bsize=4096 blocks=130560, imaxpct=25
00:10:26.975 = sunit=0 swidth=0 blks
00:10:26.975 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:10:26.975 log =internal log bsize=4096 blocks=16384, version=2
00:10:26.975 = sectsz=512 sunit=0 blks, lazy-count=1
00:10:26.975 realtime =none extsz=4096 blocks=0, rtextents=0
00:10:27.542 Discarding blocks...Done.
00:10:27.542 17:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0
00:10:27.542 17:47:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:30.075 17:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:30.334 17:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync
00:10:30.334 17:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:30.334 17:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync
00:10:30.334 17:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0
00:10:30.334 17:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:30.334 17:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2892051
00:10:30.334 17:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:10:30.334 17:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:10:30.334 17:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:10:30.334 17:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:10:30.334
00:10:30.334 real 0m3.448s
00:10:30.334 user 0m0.015s
00:10:30.334 sys 0m0.058s
00:10:30.334 17:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:30.334 17:47:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x
00:10:30.334 ************************************
00:10:30.334 END TEST filesystem_xfs
00:10:30.334 ************************************
00:10:30.334 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:10:30.334 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:30.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2892051
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2892051 ']'
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2892051
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2892051
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2892051'
killing process with pid 2892051
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2892051
00:10:30.594 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2892051
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:10:30.854
00:10:30.854 real 0m17.743s
00:10:30.854 user 1m10.081s
00:10:30.854 sys 0m1.119s
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:30.854 ************************************
00:10:30.854 END TEST nvmf_filesystem_no_in_capsule
00:10:30.854 ************************************
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:10:30.854 ************************************
00:10:30.854 START TEST nvmf_filesystem_in_capsule
00:10:30.854 ************************************
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2896280
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2896280
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2896280 ']'
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
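The teardown traced above follows a guarded kill pattern: check that the pid argument is non-empty, probe the process with kill -0, read its command name with ps so a sudo wrapper is never killed by mistake, then kill and wait to reap it. A hedged reconstruction of that flow (the real helper lives in common/autotest_common.sh and may differ in details such as signal escalation):

    # sketch of the guarded kill sequence shown in the trace
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # the '[' -z ... ']' guard traced above
        kill -0 "$pid" 2>/dev/null || return 0    # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1  # refuse to kill the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap the child so no zombie remains
    }
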
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:30.854 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:30.854 [2024-12-06 17:47:18.573086] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization...
00:10:30.854 [2024-12-06 17:47:18.573143] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:30.854 [2024-12-06 17:47:18.644734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:30.854 [2024-12-06 17:47:18.675686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:30.854 [2024-12-06 17:47:18.675718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:30.854 [2024-12-06 17:47:18.675723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:30.854 [2024-12-06 17:47:18.675729] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:30.854 [2024-12-06 17:47:18.675733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:30.854 [2024-12-06 17:47:18.677028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:30.854 [2024-12-06 17:47:18.677179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:30.854 [2024-12-06 17:47:18.677218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:30.854 [2024-12-06 17:47:18.677219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:31.114 [2024-12-06 17:47:18.777632] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:31.114 Malloc1
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:31.114 [2024-12-06 17:47:18.898825] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:31.114 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:10:31.114 {
00:10:31.114 "name": "Malloc1",
00:10:31.114 "aliases": [
00:10:31.114 "45f890b0-53ca-4bd7-a5e1-84dc12c35b25"
00:10:31.114 ],
00:10:31.114 "product_name": "Malloc disk",
00:10:31.115 "block_size": 512,
00:10:31.115 "num_blocks": 1048576,
00:10:31.115 "uuid": "45f890b0-53ca-4bd7-a5e1-84dc12c35b25",
00:10:31.115 "assigned_rate_limits": {
00:10:31.115 "rw_ios_per_sec": 0,
00:10:31.115 "rw_mbytes_per_sec": 0,
00:10:31.115 "r_mbytes_per_sec": 0,
00:10:31.115 "w_mbytes_per_sec": 0
00:10:31.115 },
00:10:31.115 "claimed": true,
00:10:31.115 "claim_type": "exclusive_write",
00:10:31.115 "zoned": false,
00:10:31.115 "supported_io_types": {
00:10:31.115 "read": true,
00:10:31.115 "write": true,
00:10:31.115 "unmap": true,
00:10:31.115 "flush": true,
00:10:31.115 "reset": true,
00:10:31.115 "nvme_admin": false,
00:10:31.115 "nvme_io": false,
00:10:31.115 "nvme_io_md": false,
00:10:31.115 "write_zeroes": true,
00:10:31.115 "zcopy": true,
00:10:31.115 "get_zone_info": false,
00:10:31.115 "zone_management": false,
00:10:31.115 "zone_append": false,
00:10:31.115 "compare": false,
00:10:31.115 "compare_and_write": false,
00:10:31.115 "abort": true,
00:10:31.115 "seek_hole": false,
00:10:31.115 "seek_data": false,
00:10:31.115 "copy": true,
00:10:31.115 "nvme_iov_md": false
00:10:31.115 },
00:10:31.115 "memory_domains": [
00:10:31.115 {
00:10:31.115 "dma_device_id": "system",
00:10:31.115 "dma_device_type": 1
00:10:31.115 },
00:10:31.115 {
00:10:31.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:31.115 "dma_device_type": 2
00:10:31.115 }
00:10:31.115 ],
00:10:31.115 "driver_specific": {}
00:10:31.115 }
00:10:31.115 ]'
00:10:31.115 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:10:31.374 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:10:31.374 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:10:31.374 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:10:31.374 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:10:31.374 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:10:31.374 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:10:31.374 17:47:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:32.756 17:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:10:32.756 17:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0
00:10:32.756 17:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:10:32.756 17:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:10:32.756 17:47:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2
00:10:35.282 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:10:35.282 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:10:35.282 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:10:35.282 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:10:35.282 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:10:35.282 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0
00:10:35.282 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:10:35.282 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:10:35.282 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:10:35.282 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:10:35.282 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:10:35.282 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:10:35.282 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:10:35.282 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:10:35.282 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:10:35.282 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:10:35.282 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:10:35.282 17:47:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:10:35.846 17:47:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:10:36.780 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']'
00:10:36.780 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1
00:10:36.780 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:36.780 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:36.780 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:36.780 ************************************
00:10:36.780 START TEST filesystem_in_capsule_ext4
00:10:36.780 ************************************
00:10:36.780 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1
00:10:36.780 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:10:36.780 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:10:36.780 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:10:36.780 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4
00:10:36.780 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:10:36.780 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0
00:10:36.780 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force
00:10:36.780 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:10:36.780 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:10:36.780 17:47:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:10:36.780 mke2fs 1.47.0 (5-Feb-2023)
00:10:36.780 Discarding device blocks: 0/522240 done
00:10:36.780 Creating filesystem with 522240 1k blocks and 130560 inodes
00:10:36.780 Filesystem UUID: d8f1f07d-7877-4664-a3e8-4d610554e3bd
00:10:36.780 Superblock backups stored on blocks:
00:10:36.780 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:10:36.780
00:10:36.780 Allocating group tables: 0/64 done
00:10:36.780 Writing inode tables: 0/64 done
00:10:37.037 Creating journal (8192 blocks): done
00:10:39.348 Writing superblocks and filesystem accounting information: 0/64 done
00:10:39.348
00:10:39.348 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0
00:10:39.348 17:47:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2896280
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:10:45.913
00:10:45.913 real 0m8.388s
00:10:45.913 user 0m0.017s
00:10:45.913 sys 0m0.062s
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x
00:10:45.913 ************************************
00:10:45.913 END TEST filesystem_in_capsule_ext4
00:10:45.913 ************************************
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:45.913 ************************************
00:10:45.913 START TEST filesystem_in_capsule_btrfs
00:10:45.913 ************************************
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:10:45.913 17:47:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:10:45.913 btrfs-progs v6.8.1
00:10:45.913 See https://btrfs.readthedocs.io for more information.
00:10:45.913
00:10:45.913 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:10:45.913 NOTE: several default settings have changed in version 5.15, please make sure
00:10:45.913 this does not affect your deployments:
00:10:45.913 - DUP for metadata (-m dup)
00:10:45.913 - enabled no-holes (-O no-holes)
00:10:45.913 - enabled free-space-tree (-R free-space-tree)
00:10:45.913
00:10:45.913 Label: (null)
00:10:45.913 UUID: 3378ba8c-6fe5-47aa-be7c-b33ea0a95b52
00:10:45.913 Node size: 16384
00:10:45.913 Sector size: 4096 (CPU page size: 4096)
00:10:45.913 Filesystem size: 510.00MiB
00:10:45.913 Block group profiles:
00:10:45.913 Data: single 8.00MiB
00:10:45.913 Metadata: DUP 32.00MiB
00:10:45.913 System: DUP 8.00MiB
00:10:45.914 SSD detected: yes
00:10:45.914 Zoned device: no
00:10:45.914 Features: extref, skinny-metadata, no-holes, free-space-tree
00:10:45.914 Checksum: crc32c
00:10:45.914 Number of devices: 1
00:10:45.914 Devices:
00:10:45.914 ID SIZE PATH
00:10:45.914 1 510.00MiB /dev/nvme0n1p1
00:10:45.914
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2896280
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:10:45.914
00:10:45.914 real 0m0.560s
00:10:45.914 user 0m0.022s
00:10:45.914 sys 0m0.090s
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:10:45.914 ************************************
00:10:45.914 END TEST filesystem_in_capsule_btrfs
00:10:45.914 ************************************
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:45.914 ************************************
00:10:45.914 START TEST filesystem_in_capsule_xfs
00:10:45.914 ************************************
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f
00:10:45.914 17:47:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:10:45.914 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:10:45.914 = sectsz=512 attr=2, projid32bit=1
00:10:45.914 = crc=1 finobt=1, sparse=1, rmapbt=0
00:10:45.914 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:10:45.914 data = bsize=4096 blocks=130560, imaxpct=25
00:10:45.914 = sunit=0 swidth=0 blks
00:10:45.914 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:10:45.914 log =internal log bsize=4096 blocks=16384, version=2
00:10:45.914 = sectsz=512 sunit=0 blks, lazy-count=1
00:10:45.914 realtime =none extsz=4096 blocks=0, rtextents=0
00:10:46.849 Discarding blocks...Done.
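The btrfs pass above and the xfs pass continuing below drive the same sequence from target/filesystem.sh: put a filesystem on the partition of the fabric-attached namespace, mount it, run a minimal write/delete cycle, unmount, then confirm the target process (pid 2896280 here) survived and the block devices are still visible. A condensed, hypothetical rendering of that sequence, with the device, mountpoint, and pid spelled out as plain variables instead of the script's make_filesystem and retry helpers:

# Illustrative distillation of target/filesystem.sh steps 21-43, not the
# script itself; assumes the NVMe-oF namespace shows up as /dev/nvme0n1
# with partition 1 and that $nvmfpid holds the target's pid.
dev=/dev/nvme0n1p1
mnt=/mnt/device
mkfs.xfs -f "$dev"      # the trace runs btrfs first, then xfs
mount "$dev" "$mnt"
touch "$mnt/aaa"        # a tiny write over the TCP fabric
sync
rm "$mnt/aaa"
sync
umount "$mnt"
kill -0 "$nvmfpid"                        # target still alive?
lsblk -l -o NAME | grep -q -w nvme0n1     # disk still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible

The I/O itself is trivial; what the test actually checks is that a real filesystem workload crossing the TCP transport (with the data carried in-capsule, which is the point of this suite variant) neither kills the target nor drops the namespace.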
00:10:46.849 17:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:46.849 17:47:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:48.774 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:48.774 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:48.774 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:48.774 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:48.774 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:48.774 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:48.774 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2896280 00:10:48.774 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:48.774 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:48.774 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:48.774 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:48.774 00:10:48.774 real 0m2.684s 00:10:48.774 user 0m0.020s 00:10:48.774 sys 0m0.055s 00:10:48.774 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.774 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:48.774 ************************************ 00:10:48.774 END TEST filesystem_in_capsule_xfs 00:10:48.774 ************************************ 00:10:48.774 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:49.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2896280 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2896280 ']' 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2896280 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2896280 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2896280' 00:10:49.034 killing process with pid 2896280 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2896280 00:10:49.034 17:47:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2896280 00:10:49.292 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:49.292 00:10:49.292 real 0m18.489s 00:10:49.292 user 1m12.977s 00:10:49.292 sys 0m1.116s 00:10:49.292 17:47:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.292 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.292 ************************************ 00:10:49.292 END TEST nvmf_filesystem_in_capsule 00:10:49.292 ************************************ 00:10:49.292 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:49.292 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:49.292 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:49.292 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:49.292 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:49.292 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:49.292 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:49.293 rmmod nvme_tcp 00:10:49.293 rmmod nvme_fabrics 00:10:49.293 rmmod nvme_keyring 00:10:49.293 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:49.293 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:49.293 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:49.293 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:49.293 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:49.293 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:49.293 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:49.293 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:49.293 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:49.293 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:49.293 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:49.293 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:49.293 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:49.293 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.293 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.293 17:47:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.829 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:51.829 00:10:51.829 real 0m44.391s 00:10:51.829 user 2m24.568s 00:10:51.829 sys 0m6.764s 00:10:51.829 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.829 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:51.829 
************************************ 00:10:51.829 END TEST nvmf_filesystem 00:10:51.829 ************************************ 00:10:51.829 17:47:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:51.829 17:47:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:51.829 17:47:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.829 17:47:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:51.829 ************************************ 00:10:51.830 START TEST nvmf_target_discovery 00:10:51.830 ************************************ 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:51.830 * Looking for test storage... 00:10:51.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:51.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.830 --rc genhtml_branch_coverage=1 00:10:51.830 --rc genhtml_function_coverage=1 00:10:51.830 --rc genhtml_legend=1 00:10:51.830 --rc geninfo_all_blocks=1 00:10:51.830 --rc geninfo_unexecuted_blocks=1 00:10:51.830 00:10:51.830 ' 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:51.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.830 --rc genhtml_branch_coverage=1 00:10:51.830 --rc genhtml_function_coverage=1 00:10:51.830 --rc genhtml_legend=1 00:10:51.830 --rc geninfo_all_blocks=1 00:10:51.830 --rc geninfo_unexecuted_blocks=1 00:10:51.830 00:10:51.830 ' 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:51.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.830 --rc genhtml_branch_coverage=1 00:10:51.830 --rc genhtml_function_coverage=1 00:10:51.830 --rc genhtml_legend=1 00:10:51.830 --rc geninfo_all_blocks=1 00:10:51.830 --rc geninfo_unexecuted_blocks=1 00:10:51.830 00:10:51.830 ' 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:51.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.830 --rc genhtml_branch_coverage=1 00:10:51.830 --rc genhtml_function_coverage=1 00:10:51.830 --rc genhtml_legend=1 00:10:51.830 --rc geninfo_all_blocks=1 00:10:51.830 --rc geninfo_unexecuted_blocks=1 00:10:51.830 00:10:51.830 ' 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.830 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:51.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:51.831 17:47:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.104 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:57.104 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:57.105 17:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:57.105 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:57.105 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:57.105 Found net devices under 0000:31:00.0: cvl_0_0 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
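This block is the harness's NIC auto-detection: for each PCI function that matched the e810 allow-list (device ID 0x159b), it globs /sys/bus/pci/devices/$pci/net/ to find the bound kernel interface, checks that it is up, and records it; the first port just resolved to cvl_0_0, and the same loop handles the second port below. A rough standalone equivalent of that sysfs walk, simplified to the single vendor/device pair seen in this run (an illustrative sketch, not the harness code):

# Map Intel E810 functions (8086:159b) to their net interface names.
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue   # no netdev bound (e.g. device on vfio-pci)
        echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done

The two hits become net_devs=(cvl_0_0 cvl_0_1), which the TCP init code then splits into a target interface and an initiator interface.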
00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:57.105 Found net devices under 0000:31:00.1: cvl_0_1 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:57.105 17:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:57.105 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:57.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:10:57.106 00:10:57.106 --- 10.0.0.2 ping statistics --- 00:10:57.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.106 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:57.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:10:57.106 00:10:57.106 --- 10.0.0.1 ping statistics --- 00:10:57.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.106 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2904854 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2904854 00:10:57.106 17:47:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2904854 ']' 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.106 17:47:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.387 [2024-12-06 17:47:44.943661] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:10:57.387 [2024-12-06 17:47:44.943712] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.388 [2024-12-06 17:47:45.031156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:57.388 [2024-12-06 17:47:45.083604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.388 [2024-12-06 17:47:45.083654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:57.388 [2024-12-06 17:47:45.083663] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.388 [2024-12-06 17:47:45.083671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.388 [2024-12-06 17:47:45.083677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
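Everything from nvmf_tcp_init through nvmfappstart above amounts to letting one machine play both ends of the fabric: the first E810 port is moved into a private network namespace as the target side (10.0.0.2), the second stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened in the firewall, connectivity is verified both ways, and nvmf_tgt (pid 2904854) is started inside the namespace. Reduced to its essentials, with an illustrative readiness poll standing in for the harness's waitforlisten:

# Condensed sketch of the target/initiator split seen in the trace above;
# paths are relative to an SPDK checkout and the poll loop is an assumption.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until ./scripts/rpc.py spdk_get_version &>/dev/null; do
    sleep 0.5    # stand-in for waitforlisten $nvmfpid
done

The unix-domain RPC socket (/var/tmp/spdk.sock by default) lives in the filesystem, not in the network namespace, so rpc.py can drive the target from the root namespace while the NVMe/TCP traffic itself crosses the two physical E810 ports.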
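With the target listening, the trace that follows provisions the discovery test fixture over RPC: one TCP transport, then four null bdevs (Null1..Null4) wrapped in subsystems nqn.2016-06.io.spdk:cnode1..4, each with a namespace and a TCP listener on 10.0.0.2:4420, plus a listener on the discovery subsystem and a referral to port 4430. Rewritten as direct scripts/rpc.py invocations (an assumption about what the trace's rpc_cmd wrapper resolves to; the parameter values are the ones visible below):

# Sketch of the provisioning sequence from target/discovery.sh as plain
# rpc.py calls; values match the trace, the loop mirrors the script's seq 1 4.
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
    $rpc bdev_null_create Null$i 102400 512
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        -a -s SPDK0000000000000$i        # -a: allow any host; -s: serial number
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420
done
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

Querying from the initiator should then return six discovery log records, exactly as the nvme discover output further down shows: the current discovery subsystem, the four NVMe subsystems, and the port-4430 referral.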
00:10:57.388 [2024-12-06 17:47:45.085795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.388 [2024-12-06 17:47:45.085957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.388 [2024-12-06 17:47:45.086114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.388 [2024-12-06 17:47:45.086131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.971 [2024-12-06 17:47:45.760173] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.971 Null1 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.971 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.229 17:47:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.229 [2024-12-06 17:47:45.817403] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.229 Null2 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:58.229 Null3 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.229 Null4 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.229 17:47:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.229 17:47:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:10:58.488 00:10:58.488 Discovery Log Number of Records 6, Generation counter 6 00:10:58.488 =====Discovery Log Entry 0====== 00:10:58.488 trtype: tcp 00:10:58.488 adrfam: ipv4 00:10:58.488 subtype: current discovery subsystem 00:10:58.488 treq: not required 00:10:58.488 portid: 0 00:10:58.488 trsvcid: 4420 00:10:58.488 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:58.488 traddr: 10.0.0.2 00:10:58.488 eflags: explicit discovery connections, duplicate discovery information 00:10:58.488 sectype: none 00:10:58.488 =====Discovery Log Entry 1====== 00:10:58.488 trtype: tcp 00:10:58.488 adrfam: ipv4 00:10:58.488 subtype: nvme subsystem 00:10:58.488 treq: not required 00:10:58.488 portid: 0 00:10:58.488 trsvcid: 4420 00:10:58.488 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:58.488 traddr: 10.0.0.2 00:10:58.488 eflags: none 00:10:58.488 sectype: none 00:10:58.488 =====Discovery Log Entry 2====== 00:10:58.488 trtype: tcp 00:10:58.488 adrfam: ipv4 00:10:58.488 subtype: nvme subsystem 00:10:58.488 treq: not required 00:10:58.488 portid: 0 00:10:58.488 trsvcid: 4420 00:10:58.488 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:58.488 traddr: 10.0.0.2 00:10:58.488 eflags: none 00:10:58.488 sectype: none 00:10:58.488 =====Discovery Log Entry 3====== 00:10:58.488 trtype: tcp 00:10:58.488 adrfam: ipv4 00:10:58.488 subtype: nvme subsystem 00:10:58.488 treq: not required 00:10:58.488 portid: 0 00:10:58.488 trsvcid: 4420 00:10:58.488 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:58.488 traddr: 10.0.0.2 00:10:58.488 eflags: none 00:10:58.488 sectype: none 00:10:58.488 =====Discovery Log Entry 4====== 00:10:58.488 trtype: tcp 00:10:58.488 adrfam: ipv4 00:10:58.488 subtype: nvme subsystem 
00:10:58.488 treq: not required
00:10:58.488 portid: 0
00:10:58.488 trsvcid: 4420
00:10:58.488 subnqn: nqn.2016-06.io.spdk:cnode4
00:10:58.488 traddr: 10.0.0.2
00:10:58.488 eflags: none
00:10:58.488 sectype: none
00:10:58.488 =====Discovery Log Entry 5======
00:10:58.488 trtype: tcp
00:10:58.488 adrfam: ipv4
00:10:58.488 subtype: discovery subsystem referral
00:10:58.488 treq: not required
00:10:58.488 portid: 0
00:10:58.488 trsvcid: 4430
00:10:58.488 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:10:58.488 traddr: 10.0.0.2
00:10:58.488 eflags: none
00:10:58.488 sectype: none
00:10:58.488 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:10:58.488 Perform nvmf subsystem discovery via RPC
00:10:58.488 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:10:58.488 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:58.488 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:58.488 [
00:10:58.488 {
00:10:58.488 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:10:58.488 "subtype": "Discovery",
00:10:58.488 "listen_addresses": [
00:10:58.488 {
00:10:58.488 "trtype": "TCP",
00:10:58.488 "adrfam": "IPv4",
00:10:58.488 "traddr": "10.0.0.2",
00:10:58.488 "trsvcid": "4420"
00:10:58.488 }
00:10:58.488 ],
00:10:58.488 "allow_any_host": true,
00:10:58.488 "hosts": []
00:10:58.488 },
00:10:58.488 {
00:10:58.488 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:10:58.488 "subtype": "NVMe",
00:10:58.488 "listen_addresses": [
00:10:58.488 {
00:10:58.488 "trtype": "TCP",
00:10:58.488 "adrfam": "IPv4",
00:10:58.488 "traddr": "10.0.0.2",
00:10:58.488 "trsvcid": "4420"
00:10:58.488 }
00:10:58.488 ],
00:10:58.488 "allow_any_host": true,
00:10:58.488 "hosts": [],
00:10:58.488 "serial_number": "SPDK00000000000001",
00:10:58.488 "model_number": "SPDK bdev Controller",
00:10:58.488 "max_namespaces": 32,
00:10:58.488 "min_cntlid": 1,
00:10:58.488 "max_cntlid": 65519,
00:10:58.488 "namespaces": [
00:10:58.488 {
00:10:58.488 "nsid": 1,
00:10:58.488 "bdev_name": "Null1",
00:10:58.488 "name": "Null1",
00:10:58.488 "nguid": "EC498A6A1B044522812A1AAB5EAE0D3B",
00:10:58.488 "uuid": "ec498a6a-1b04-4522-812a-1aab5eae0d3b"
00:10:58.488 }
00:10:58.488 ]
00:10:58.488 },
00:10:58.488 {
00:10:58.488 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:10:58.488 "subtype": "NVMe",
00:10:58.488 "listen_addresses": [
00:10:58.488 {
00:10:58.488 "trtype": "TCP",
00:10:58.488 "adrfam": "IPv4",
00:10:58.488 "traddr": "10.0.0.2",
00:10:58.488 "trsvcid": "4420"
00:10:58.488 }
00:10:58.488 ],
00:10:58.488 "allow_any_host": true,
00:10:58.488 "hosts": [],
00:10:58.488 "serial_number": "SPDK00000000000002",
00:10:58.488 "model_number": "SPDK bdev Controller",
00:10:58.488 "max_namespaces": 32,
00:10:58.488 "min_cntlid": 1,
00:10:58.488 "max_cntlid": 65519,
00:10:58.488 "namespaces": [
00:10:58.488 {
00:10:58.488 "nsid": 1,
00:10:58.488 "bdev_name": "Null2",
00:10:58.488 "name": "Null2",
00:10:58.488 "nguid": "C27AAB835B614881A5897BB830671E93",
00:10:58.488 "uuid": "c27aab83-5b61-4881-a589-7bb830671e93"
00:10:58.488 }
00:10:58.488 ]
00:10:58.488 },
00:10:58.488 {
00:10:58.489 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:10:58.489 "subtype": "NVMe",
00:10:58.489 "listen_addresses": [
00:10:58.489 {
00:10:58.489 "trtype": "TCP",
00:10:58.489 "adrfam": "IPv4",
00:10:58.489 "traddr": "10.0.0.2",
00:10:58.489 "trsvcid": "4420"
00:10:58.489 }
00:10:58.489 ],
00:10:58.489 "allow_any_host": true,
00:10:58.489 "hosts": [],
00:10:58.489 "serial_number": "SPDK00000000000003",
00:10:58.489 "model_number": "SPDK bdev Controller",
00:10:58.489 "max_namespaces": 32,
00:10:58.489 "min_cntlid": 1,
00:10:58.489 "max_cntlid": 65519,
00:10:58.489 "namespaces": [
00:10:58.489 {
00:10:58.489 "nsid": 1,
00:10:58.489 "bdev_name": "Null3",
00:10:58.489 "name": "Null3",
00:10:58.489 "nguid": "B29EDB0E88C5419AB3529CB72A3113F7",
00:10:58.489 "uuid": "b29edb0e-88c5-419a-b352-9cb72a3113f7"
00:10:58.489 }
00:10:58.489 ]
00:10:58.489 },
00:10:58.489 {
00:10:58.489 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:10:58.489 "subtype": "NVMe",
00:10:58.489 "listen_addresses": [
00:10:58.489 {
00:10:58.489 "trtype": "TCP",
00:10:58.489 "adrfam": "IPv4",
00:10:58.489 "traddr": "10.0.0.2",
00:10:58.489 "trsvcid": "4420"
00:10:58.489 }
00:10:58.489 ],
00:10:58.489 "allow_any_host": true,
00:10:58.489 "hosts": [],
00:10:58.489 "serial_number": "SPDK00000000000004",
00:10:58.489 "model_number": "SPDK bdev Controller",
00:10:58.489 "max_namespaces": 32,
00:10:58.489 "min_cntlid": 1,
00:10:58.489 "max_cntlid": 65519,
00:10:58.489 "namespaces": [
00:10:58.489 {
00:10:58.489 "nsid": 1,
00:10:58.489 "bdev_name": "Null4",
00:10:58.489 "name": "Null4",
00:10:58.489 "nguid": "1A21A34B1A5A408F80609638F8E3983E",
00:10:58.489 "uuid": "1a21a34b-1a5a-408f-8060-9638f8e3983e"
00:10:58.489 }
00:10:58.489 ]
00:10:58.489 }
00:10:58.489 ]
00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:58.489 17:47:46
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:58.489 17:47:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:58.489 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:58.489 rmmod nvme_tcp 00:10:58.489 rmmod nvme_fabrics 00:10:58.746 rmmod nvme_keyring 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2904854 ']' 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2904854 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2904854 ']' 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2904854 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2904854 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2904854' 00:10:58.746 killing process with pid 2904854 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2904854 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2904854 00:10:58.746 17:47:46 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.746 17:47:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:01.276 00:11:01.276 real 0m9.349s 00:11:01.276 user 0m7.246s 00:11:01.276 sys 0m4.603s 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.276 ************************************ 00:11:01.276 END TEST nvmf_target_discovery 00:11:01.276 ************************************ 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:01.276 ************************************ 00:11:01.276 START TEST nvmf_referrals 00:11:01.276 ************************************ 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:01.276 * Looking for test storage... 
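Taken together, the nvmf_target_discovery trace above reduces to the following setup/teardown cycle. This is a reconstruction for reference only: the test drives these RPCs through the rpc_cmd wrapper seen in the trace, and the standalone scripts/rpc.py invocation style (and omitting the --hostnqn/--hostid flags on nvme discover) is an assumption here.

  rpc=scripts/rpc.py
  for i in $(seq 1 4); do
    "$rpc" bdev_null_create Null$i 102400 512               # null bdev, args as traced
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
           -a -s SPDK0000000000000$i                        # -a: allow any host
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
           -t tcp -a 10.0.0.2 -s 4420
  done
  "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  "$rpc" nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover -t tcp -a 10.0.0.2 -s 4420                  # expects the 6 records above
  for i in $(seq 1 4); do                                   # teardown mirrors setup
    "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    "$rpc" bdev_null_delete Null$i
  done
  "$rpc" nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430

The nvme discover output earlier confirms the expected six discovery-log records: the current discovery subsystem, cnode1 through cnode4, and the referral on port 4430.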
00:11:01.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:01.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.276 --rc genhtml_branch_coverage=1 00:11:01.276 --rc genhtml_function_coverage=1 00:11:01.276 --rc genhtml_legend=1 00:11:01.276 --rc geninfo_all_blocks=1 00:11:01.276 --rc geninfo_unexecuted_blocks=1 00:11:01.276 00:11:01.276 ' 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:01.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.276 --rc genhtml_branch_coverage=1 00:11:01.276 --rc genhtml_function_coverage=1 00:11:01.276 --rc genhtml_legend=1 00:11:01.276 --rc geninfo_all_blocks=1 00:11:01.276 --rc geninfo_unexecuted_blocks=1 00:11:01.276 00:11:01.276 ' 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:01.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.276 --rc genhtml_branch_coverage=1 00:11:01.276 --rc genhtml_function_coverage=1 00:11:01.276 --rc genhtml_legend=1 00:11:01.276 --rc geninfo_all_blocks=1 00:11:01.276 --rc geninfo_unexecuted_blocks=1 00:11:01.276 00:11:01.276 ' 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:01.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.276 --rc genhtml_branch_coverage=1 00:11:01.276 --rc genhtml_function_coverage=1 00:11:01.276 --rc genhtml_legend=1 00:11:01.276 --rc geninfo_all_blocks=1 00:11:01.276 --rc geninfo_unexecuted_blocks=1 00:11:01.276 00:11:01.276 ' 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.276 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:01.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:01.277 17:47:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.560 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:06.560 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:06.560 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:06.560 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:06.561 17:47:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:06.561 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:06.561 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:06.561 
17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:06.561 Found net devices under 0000:31:00.0: cvl_0_0 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:06.561 Found net devices under 0000:31:00.1: cvl_0_1 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:06.561 17:47:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:06.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:06.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms
00:11:06.561
00:11:06.561 --- 10.0.0.2 ping statistics ---
00:11:06.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:06.561 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:06.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:06.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms
00:11:06.561
00:11:06.561 --- 10.0.0.1 ping statistics ---
00:11:06.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:06.561 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:06.561 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:06.562 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:06.562 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:06.562 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:06.562 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:06.562 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:06.822 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:11:06.822 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:06.822 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:06.822 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:06.822 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2909554
00:11:06.822 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2909554
00:11:06.822 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:06.822 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2909554 ']'
00:11:06.822 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:06.822 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:06.822 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
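The nvmftestinit plumbing traced above wires a private network namespace for the target: cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator keeps cvl_0_1 at 10.0.0.1/24 in the root namespace, port 4420 is opened via iptables, and connectivity is ping-verified in both directions before the target launches inside the namespace. A condensed sketch of that wiring (commands as traced; the socket wait loop at the end is a simplified stand-in for the test's waitforlisten helper):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # simplified waitforlisten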
00:11:06.822 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:06.822 17:47:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:06.822 [2024-12-06 17:47:54.435363] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization...
00:11:06.822 [2024-12-06 17:47:54.435429] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:06.822 [2024-12-06 17:47:54.525531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:06.822 [2024-12-06 17:47:54.562738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:06.822 [2024-12-06 17:47:54.562770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:06.822 [2024-12-06 17:47:54.562778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:06.822 [2024-12-06 17:47:54.562785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:06.822 [2024-12-06 17:47:54.562791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:06.822 [2024-12-06 17:47:54.564238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:06.822 [2024-12-06 17:47:54.564413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:06.822 [2024-12-06 17:47:54.564561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:06.822 [2024-12-06 17:47:54.564563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0
00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:07.763 [2024-12-06 17:47:55.272293] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
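With the TCP transport created and the discovery listener up on port 8009, referrals.sh exercises the referral API: it registers three referrals, counts them over RPC, and cross-checks that count against what a host actually sees when it runs discovery. A sketch of that verification (standalone scripts/rpc.py calls are an assumption in place of the test's rpc_cmd wrapper; the jq filter is the one used in the trace below):

  rpc=scripts/rpc.py
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    "$rpc" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  [ "$("$rpc" nvmf_discovery_get_referrals | jq length)" -eq 3 ]
  # Host view must match the RPC view: discover on 8009 and keep every
  # record that is not the current discovery subsystem itself.
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  # expected: 127.0.0.2 127.0.0.3 127.0.0.4 -- then the referrals are
  # removed one by one and the RPC count is expected to drop back to 0.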
00:11:07.763 [2024-12-06 17:47:55.300480] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:07.763 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:08.022 17:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:08.022 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:08.281 17:47:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:08.540 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:08.540 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:08.540 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:08.540 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:08.540 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:08.540 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.540 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:08.540 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:08.540 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:08.540 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:08.540 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:08.540 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.540 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:08.798 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:08.798 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:08.798 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.798 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.798 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.798 17:47:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:08.798 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:08.798 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:08.798 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.798 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.798 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:08.798 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:08.798 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.798 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:08.799 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:08.799 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:08.799 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:08.799 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:08.799 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.799 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:08.799 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:09.058 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:09.058 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:09.058 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:09.058 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:09.058 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:09.058 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:09.058 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:09.318 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:09.318 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:09.318 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:09.318 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:09.318 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:09.318 17:47:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:09.578 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:09.578 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:09.578 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.578 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:09.578 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.578 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:09.578 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.578 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:09.578 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:09.578 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.578 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:09.578 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:09.578 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:09.578 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:09.578 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:09.578 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:09.578 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
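The trace above is the heart of target/referrals.sh: referrals are added and removed through the RPC interface, and each state is cross-checked against what a real initiator sees in the discovery log page. A condensed sketch of that round trip, assuming a running nvmf_tgt whose discovery service listens on 10.0.0.2:8009 and SPDK's scripts/rpc.py on PATH (rpc_cmd in the trace is a thin wrapper around it):

    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430     # referral to another discovery service
    rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'   # RPC view: prints 127.0.0.2
    # Wire view: the same referral must surface in the discovery log page
    nvme discover --hostnqn="$NVME_HOSTNQN" -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430  # after removal both views are empty

The -n flag in the later steps attaches the referral to a specific subsystem NQN instead of the default discovery NQN, which is why the trace compares subnqn values as well as addresses.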
00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:09.837 rmmod nvme_tcp 00:11:09.837 rmmod nvme_fabrics 00:11:09.837 rmmod nvme_keyring 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2909554 ']' 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2909554 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2909554 ']' 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2909554 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2909554 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2909554' 00:11:09.837 killing process with pid 2909554 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2909554 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2909554 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.837 17:47:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.837 17:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:12.376 00:11:12.376 real 0m11.069s 00:11:12.376 user 0m14.272s 00:11:12.376 sys 0m5.012s 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:12.376 ************************************ 00:11:12.376 END TEST nvmf_referrals 00:11:12.376 ************************************ 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:12.376 ************************************ 00:11:12.376 START TEST nvmf_connect_disconnect 00:11:12.376 ************************************ 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:12.376 * Looking for test storage... 00:11:12.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.376 17:47:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:12.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.376 --rc genhtml_branch_coverage=1 00:11:12.376 --rc genhtml_function_coverage=1 00:11:12.376 --rc genhtml_legend=1 00:11:12.376 --rc geninfo_all_blocks=1 00:11:12.376 --rc geninfo_unexecuted_blocks=1 00:11:12.376 00:11:12.376 ' 00:11:12.376 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:12.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.376 --rc genhtml_branch_coverage=1 00:11:12.376 --rc genhtml_function_coverage=1 00:11:12.376 --rc genhtml_legend=1 00:11:12.376 --rc geninfo_all_blocks=1 00:11:12.377 --rc geninfo_unexecuted_blocks=1 00:11:12.377 00:11:12.377 ' 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:12.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.377 --rc genhtml_branch_coverage=1 00:11:12.377 --rc genhtml_function_coverage=1 00:11:12.377 --rc genhtml_legend=1 00:11:12.377 --rc geninfo_all_blocks=1 00:11:12.377 --rc geninfo_unexecuted_blocks=1 00:11:12.377 00:11:12.377 ' 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:12.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.377 --rc genhtml_branch_coverage=1 00:11:12.377 --rc genhtml_function_coverage=1 00:11:12.377 --rc genhtml_legend=1 00:11:12.377 --rc geninfo_all_blocks=1 00:11:12.377 --rc geninfo_unexecuted_blocks=1 00:11:12.377 00:11:12.377 ' 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.377 17:47:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:12.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:12.377 17:47:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:17.651 
17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:17.651 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.651 
17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:17.651 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:17.651 Found net devices under 0000:31:00.0: cvl_0_0 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
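In the prologue being traced here, gather_supported_nvmf_pci_devs matches the two e810 functions (0000:31:00.0 and 0000:31:00.1) and resolves each to its kernel net device through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A minimal sketch of that lookup, with the PCI addresses taken from this run rather than being hard requirements:

    # Each PCI network function exposes its netdev name under
    # /sys/bus/pci/devices/<bdf>/net/, mirroring the pci_net_devs glob above.
    for pci in 0000:31:00.0 0000:31:00.1; do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue          # skip functions with no bound netdev
        echo "Found net devices under $pci: ${netdir##*/}"
      done
    done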
00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:17.651 Found net devices under 0000:31:00.1: cvl_0_1 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.651 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:17.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:17.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:11:17.652 00:11:17.652 --- 10.0.0.2 ping statistics --- 00:11:17.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.652 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:17.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:17.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:11:17.652 00:11:17.652 --- 10.0.0.1 ping statistics --- 00:11:17.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.652 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2914784 00:11:17.652 17:48:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2914784 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2914784 ']' 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.652 17:48:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:17.652 [2024-12-06 17:48:05.340457] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:11:17.652 [2024-12-06 17:48:05.340508] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.652 [2024-12-06 17:48:05.426666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.652 [2024-12-06 17:48:05.463810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.652 [2024-12-06 17:48:05.463844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.652 [2024-12-06 17:48:05.463853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.652 [2024-12-06 17:48:05.463863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.652 [2024-12-06 17:48:05.463869] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
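nvmftestinit, traced above, splits one physical link across a network namespace so a single host can act as both initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2), and nvmfappstart then launches nvmf_tgt inside that namespace. A condensed sketch of the same bring-up; the final polling loop is an assumption standing in for the suite's waitforlisten helper:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # Wait until the target answers on its RPC socket before issuing RPCs
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done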
00:11:17.652 [2024-12-06 17:48:05.465420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.652 [2024-12-06 17:48:05.465571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.652 [2024-12-06 17:48:05.465721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.652 [2024-12-06 17:48:05.465722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:18.589 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:18.590 [2024-12-06 17:48:06.149052] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:18.590 17:48:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:18.590 [2024-12-06 17:48:06.204690] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:18.590 17:48:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:22.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:37.066 rmmod nvme_tcp 00:11:37.066 rmmod nvme_fabrics 00:11:37.066 rmmod nvme_keyring 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2914784 ']' 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2914784 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2914784 ']' 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2914784 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2914784 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2914784' 00:11:37.066 killing process with pid 2914784 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2914784 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2914784 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.066 17:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:38.971 00:11:38.971 real 0m26.699s 00:11:38.971 user 1m16.652s 00:11:38.971 sys 0m5.296s 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:38.971 ************************************ 00:11:38.971 END TEST nvmf_connect_disconnect 00:11:38.971 ************************************ 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.971 17:48:26 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:38.971 ************************************ 00:11:38.971 START TEST nvmf_multitarget 00:11:38.971 ************************************ 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:38.971 * Looking for test storage... 00:11:38.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:38.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.971 --rc genhtml_branch_coverage=1 00:11:38.971 --rc genhtml_function_coverage=1 00:11:38.971 --rc genhtml_legend=1 00:11:38.971 --rc geninfo_all_blocks=1 00:11:38.971 --rc geninfo_unexecuted_blocks=1 00:11:38.971 00:11:38.971 ' 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:38.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.971 --rc genhtml_branch_coverage=1 00:11:38.971 --rc genhtml_function_coverage=1 00:11:38.971 --rc genhtml_legend=1 00:11:38.971 --rc geninfo_all_blocks=1 00:11:38.971 --rc geninfo_unexecuted_blocks=1 00:11:38.971 00:11:38.971 ' 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:38.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.971 --rc genhtml_branch_coverage=1 00:11:38.971 --rc genhtml_function_coverage=1 00:11:38.971 --rc genhtml_legend=1 00:11:38.971 --rc geninfo_all_blocks=1 00:11:38.971 --rc geninfo_unexecuted_blocks=1 00:11:38.971 00:11:38.971 ' 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:38.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.971 --rc genhtml_branch_coverage=1 00:11:38.971 --rc genhtml_function_coverage=1 00:11:38.971 --rc genhtml_legend=1 00:11:38.971 --rc geninfo_all_blocks=1 00:11:38.971 --rc geninfo_unexecuted_blocks=1 00:11:38.971 00:11:38.971 ' 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.971 17:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:38.971 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:38.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:38.972 17:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:38.972 17:48:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:44.253 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:44.253 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:44.253 Found net devices under 0000:31:00.0: cvl_0_0 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:44.253 Found net devices under 0000:31:00.1: cvl_0_1 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:44.253 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:44.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:44.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:11:44.254 00:11:44.254 --- 10.0.0.2 ping statistics --- 00:11:44.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.254 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:44.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:11:44.254 00:11:44.254 --- 10.0.0.1 ping statistics --- 00:11:44.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.254 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:44.254 17:48:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:44.254 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:44.254 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:44.254 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:44.254 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:44.254 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2923999 00:11:44.254 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2923999 00:11:44.254 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2923999 ']' 00:11:44.254 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.254 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.254 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.254 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.254 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:44.254 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.254 [2024-12-06 17:48:32.054249] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:11:44.254 [2024-12-06 17:48:32.054319] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.514 [2024-12-06 17:48:32.146331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.514 [2024-12-06 17:48:32.199537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.514 [2024-12-06 17:48:32.199593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.514 [2024-12-06 17:48:32.199602] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.514 [2024-12-06 17:48:32.199610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.514 [2024-12-06 17:48:32.199616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.515 [2024-12-06 17:48:32.202005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.515 [2024-12-06 17:48:32.202173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.515 [2024-12-06 17:48:32.202234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.515 [2024-12-06 17:48:32.202235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.082 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.082 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:45.082 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:45.082 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:45.082 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:45.082 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:45.082 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:45.082 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:45.082 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:45.339 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:45.339 17:48:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:45.339 "nvmf_tgt_1" 00:11:45.339 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:45.339 "nvmf_tgt_2" 00:11:45.339 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:45.339 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:45.597 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:45.597 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:45.597 true 00:11:45.597 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:45.597 true 00:11:45.597 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:45.597 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:45.857 rmmod nvme_tcp 00:11:45.857 rmmod nvme_fabrics 00:11:45.857 rmmod nvme_keyring 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2923999 ']' 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2923999 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2923999 ']' 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2923999 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2923999 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:45.857 17:48:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2923999' 00:11:45.857 killing process with pid 2923999 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2923999 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2923999 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.857 17:48:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.396 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:48.396 00:11:48.396 real 0m9.246s 00:11:48.396 user 0m7.993s 00:11:48.396 sys 0m4.417s 00:11:48.396 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.396 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:48.396 ************************************ 00:11:48.396 END TEST nvmf_multitarget 00:11:48.396 ************************************ 00:11:48.396 17:48:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:48.396 17:48:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.396 17:48:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.396 17:48:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:48.396 ************************************ 00:11:48.396 START TEST nvmf_rpc 00:11:48.396 ************************************ 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:48.397 * Looking for test storage... 
00:11:48.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:48.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.397 --rc genhtml_branch_coverage=1 00:11:48.397 --rc genhtml_function_coverage=1 00:11:48.397 --rc genhtml_legend=1 00:11:48.397 --rc geninfo_all_blocks=1 00:11:48.397 --rc geninfo_unexecuted_blocks=1 00:11:48.397 00:11:48.397 ' 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:48.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.397 --rc genhtml_branch_coverage=1 00:11:48.397 --rc genhtml_function_coverage=1 00:11:48.397 --rc genhtml_legend=1 00:11:48.397 --rc geninfo_all_blocks=1 00:11:48.397 --rc geninfo_unexecuted_blocks=1 00:11:48.397 00:11:48.397 ' 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:48.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.397 --rc genhtml_branch_coverage=1 00:11:48.397 --rc genhtml_function_coverage=1 00:11:48.397 --rc genhtml_legend=1 00:11:48.397 --rc geninfo_all_blocks=1 00:11:48.397 --rc geninfo_unexecuted_blocks=1 00:11:48.397 00:11:48.397 ' 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:48.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.397 --rc genhtml_branch_coverage=1 00:11:48.397 --rc genhtml_function_coverage=1 00:11:48.397 --rc genhtml_legend=1 00:11:48.397 --rc geninfo_all_blocks=1 00:11:48.397 --rc geninfo_unexecuted_blocks=1 00:11:48.397 00:11:48.397 ' 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:48.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:48.397 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:48.398 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:48.398 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:48.398 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:48.398 17:48:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.398 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:48.398 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:48.398 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:48.398 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.398 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.398 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.398 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:48.398 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:48.398 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:48.398 17:48:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.679 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.679 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:53.680 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:53.680 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
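The e810/x722/mlx arrays above are filled from a pci_bus_cache map keyed by "vendor:device". How that cache is built is not shown in this trace; the following is a hedged sketch of one way such a map could be populated with lspci, producing the same "0x8086:0x159b"-style keys seen here.

  # Hypothetical rebuild of a pci_bus_cache-like map: key "0xVEND:0xDEV",
  # value = space-separated PCI BDFs (e.g. pci_bus_cache["0x8086:0x159b"]).
  declare -A pci_bus_cache
  while read -r bdf vd; do
    pci_bus_cache["0x${vd/:/:0x}"]+="$bdf "
  done < <(lspci -Dmmn | awk '{gsub(/"/, ""); print $1, $3 ":" $4}')
  echo "${pci_bus_cache[0x8086:0x159b]:-none found}"
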
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:53.680 Found net devices under 0000:31:00.0: cvl_0_0 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:53.680 Found net devices under 0000:31:00.1: cvl_0_1 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:53.680 17:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.680 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:53.681 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:53.681 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.681 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:53.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:11:53.940 00:11:53.940 --- 10.0.0.2 ping statistics --- 00:11:53.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.940 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:53.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:11:53.940 00:11:53.940 --- 10.0.0.1 ping statistics --- 00:11:53.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.940 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2928715 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2928715 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2928715 ']' 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.940 17:48:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:54.200 [2024-12-06 17:48:41.784790] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
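nvmf_tcp_init, traced above, splits the two E810 ports so that target (10.0.0.2, inside the cvl_0_0_ns_spdk namespace) and initiator (10.0.0.1, in the root namespace) talk over the physical link. Condensed from the trace (run as root):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # The comment tags the rule so teardown can find and remove exactly this entry.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # initiator -> target sanity check
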
00:11:54.200 [2024-12-06 17:48:41.784858] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.200 [2024-12-06 17:48:41.876638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:54.200 [2024-12-06 17:48:41.930240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.200 [2024-12-06 17:48:41.930294] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:54.200 [2024-12-06 17:48:41.930303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.200 [2024-12-06 17:48:41.930311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.200 [2024-12-06 17:48:41.930317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:54.200 [2024-12-06 17:48:41.932431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.200 [2024-12-06 17:48:41.932591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.200 [2024-12-06 17:48:41.932753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.200 [2024-12-06 17:48:41.932752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.770 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.770 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:54.770 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:54.770 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:54.770 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:55.032 "tick_rate": 2400000000, 00:11:55.032 "poll_groups": [ 00:11:55.032 { 00:11:55.032 "name": "nvmf_tgt_poll_group_000", 00:11:55.032 "admin_qpairs": 0, 00:11:55.032 "io_qpairs": 0, 00:11:55.032 "current_admin_qpairs": 0, 00:11:55.032 "current_io_qpairs": 0, 00:11:55.032 "pending_bdev_io": 0, 00:11:55.032 "completed_nvme_io": 0, 00:11:55.032 "transports": [] 00:11:55.032 }, 00:11:55.032 { 00:11:55.032 "name": "nvmf_tgt_poll_group_001", 00:11:55.032 "admin_qpairs": 0, 00:11:55.032 "io_qpairs": 0, 00:11:55.032 "current_admin_qpairs": 0, 00:11:55.032 "current_io_qpairs": 0, 00:11:55.032 "pending_bdev_io": 0, 00:11:55.032 "completed_nvme_io": 0, 00:11:55.032 "transports": [] 00:11:55.032 }, 00:11:55.032 { 00:11:55.032 "name": "nvmf_tgt_poll_group_002", 00:11:55.032 "admin_qpairs": 0, 00:11:55.032 "io_qpairs": 0, 00:11:55.032 
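waitforlisten, whose banner appears above, blocks until the freshly started nvmf_tgt (pid 2928715 here) answers on /var/tmp/spdk.sock. Its body is not shown in this trace; below is a minimal sketch under the assumption that process liveness plus socket presence is enough, with the retry bound (max_retries=100) taken from the trace.

  waitforlisten_sketch() {                 # hypothetical stand-in, not SPDK's helper
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do        # max_retries=100, as in the trace
      kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
      [[ -S $rpc_addr ]] && return 0           # RPC socket is up
      sleep 0.1
    done
    return 1
  }
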
"current_admin_qpairs": 0, 00:11:55.032 "current_io_qpairs": 0, 00:11:55.032 "pending_bdev_io": 0, 00:11:55.032 "completed_nvme_io": 0, 00:11:55.032 "transports": [] 00:11:55.032 }, 00:11:55.032 { 00:11:55.032 "name": "nvmf_tgt_poll_group_003", 00:11:55.032 "admin_qpairs": 0, 00:11:55.032 "io_qpairs": 0, 00:11:55.032 "current_admin_qpairs": 0, 00:11:55.032 "current_io_qpairs": 0, 00:11:55.032 "pending_bdev_io": 0, 00:11:55.032 "completed_nvme_io": 0, 00:11:55.032 "transports": [] 00:11:55.032 } 00:11:55.032 ] 00:11:55.032 }' 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.032 [2024-12-06 17:48:42.702427] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:55.032 "tick_rate": 2400000000, 00:11:55.032 "poll_groups": [ 00:11:55.032 { 00:11:55.032 "name": "nvmf_tgt_poll_group_000", 00:11:55.032 "admin_qpairs": 0, 00:11:55.032 "io_qpairs": 0, 00:11:55.032 "current_admin_qpairs": 0, 00:11:55.032 "current_io_qpairs": 0, 00:11:55.032 "pending_bdev_io": 0, 00:11:55.032 "completed_nvme_io": 0, 00:11:55.032 "transports": [ 00:11:55.032 { 00:11:55.032 "trtype": "TCP" 00:11:55.032 } 00:11:55.032 ] 00:11:55.032 }, 00:11:55.032 { 00:11:55.032 "name": "nvmf_tgt_poll_group_001", 00:11:55.032 "admin_qpairs": 0, 00:11:55.032 "io_qpairs": 0, 00:11:55.032 "current_admin_qpairs": 0, 00:11:55.032 "current_io_qpairs": 0, 00:11:55.032 "pending_bdev_io": 0, 00:11:55.032 "completed_nvme_io": 0, 00:11:55.032 "transports": [ 00:11:55.032 { 00:11:55.032 "trtype": "TCP" 00:11:55.032 } 00:11:55.032 ] 00:11:55.032 }, 00:11:55.032 { 00:11:55.032 "name": "nvmf_tgt_poll_group_002", 00:11:55.032 "admin_qpairs": 0, 00:11:55.032 "io_qpairs": 0, 00:11:55.032 "current_admin_qpairs": 0, 00:11:55.032 "current_io_qpairs": 0, 00:11:55.032 "pending_bdev_io": 0, 00:11:55.032 "completed_nvme_io": 0, 00:11:55.032 "transports": [ 00:11:55.032 { 00:11:55.032 "trtype": "TCP" 
00:11:55.032 } 00:11:55.032 ] 00:11:55.032 }, 00:11:55.032 { 00:11:55.032 "name": "nvmf_tgt_poll_group_003", 00:11:55.032 "admin_qpairs": 0, 00:11:55.032 "io_qpairs": 0, 00:11:55.032 "current_admin_qpairs": 0, 00:11:55.032 "current_io_qpairs": 0, 00:11:55.032 "pending_bdev_io": 0, 00:11:55.032 "completed_nvme_io": 0, 00:11:55.032 "transports": [ 00:11:55.032 { 00:11:55.032 "trtype": "TCP" 00:11:55.032 } 00:11:55.032 ] 00:11:55.032 } 00:11:55.032 ] 00:11:55.032 }' 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.032 Malloc1 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
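The jcount/jsum checks above are two small rpc.sh helpers whose expansions the trace shows verbatim: jq pulls the matching values out of the captured $stats JSON, then wc -l counts them or awk totals them. Reassembled from those expansions:

  jcount() {   # number of values a jq filter yields from $stats
    local filter=$1
    jq "$filter" <<< "$stats" | wc -l
  }
  jsum() {     # numeric sum of the values a jq filter yields from $stats
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  # e.g. the assertions above: (( $(jcount '.poll_groups[].name') == 4 ))
  #                            (( $(jsum '.poll_groups[].io_qpairs') == 0 ))
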
common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.032 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.033 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:55.033 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.033 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.033 [2024-12-06 17:48:42.848107] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:55.033 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.033 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:11:55.033 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:55.033 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:11:55.033 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:55.033 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.033 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:55.294 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.294 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:55.294 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.294 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:55.294 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:55.294 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:11:55.294 [2024-12-06 17:48:42.870876] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:11:55.294 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:55.294 could not add new controller: failed to write to nvme-fabrics device 00:11:55.294 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:55.294 17:48:42 
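The connect above is wrapped in NOT because rejection is the expected outcome. Reconstructed from the fragments visible in this trace (the es bookkeeping and the final arithmetic test); the signal-death branch is my reading, hedged accordingly:

  NOT() {                            # succeed only if the wrapped command fails
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # assumed: signal deaths propagate, not inverted
    (( !es == 0 ))                   # es != 0 -> return 0; es == 0 -> return 1
  }
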
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:55.294 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:55.294 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:55.294 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:55.294 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.294 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.294 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.294 17:48:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:56.675 17:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:56.675 17:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:56.675 17:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:56.675 17:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:56.675 17:48:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:58.580 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:58.580 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:58.580 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:58.580 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:58.580 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:58.580 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:58.580 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:58.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.840 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:58.840 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:58.840 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:58.840 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.840 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:58.840 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.840 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:58.840 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
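The failure and its fix, in sequence: the first fabrics write is rejected ("does not allow host"), nvmf_subsystem_add_host whitelists the host NQN, and the identical connect then succeeds. rpc_cmd is this suite's wrapper for SPDK RPC calls (scripts/rpc.py); the values are the ones generated earlier in this run.

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
  HOSTID=${HOSTNQN##*:}            # the uuid part doubles as --hostid
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 || true   # rejected
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420           # accepted
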
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:58.840 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.840 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.840 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:58.841 [2024-12-06 17:48:46.534105] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:11:58.841 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:58.841 could not add new controller: failed to write to nvme-fabrics device 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.841 
17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.841 17:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:00.748 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:00.748 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:00.748 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.748 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:00.748 17:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:02.656 
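waitforserial and its _disconnect twin, traced repeatedly above, poll lsblk until the namespace with the given serial appears (or vanishes). Reconstructed from the fragments shown; the initial sleep and loop bound match the trace, the per-iteration sleep is inferred:

  waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=${2:-1} nvme_devices=0
    sleep 2                                  # let the fabrics connect settle
    while (( i++ <= 15 )); do
      nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
      (( nvme_devices == nvme_device_counter )) && return 0
      sleep 1
    done
    return 1
  }
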
17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.656 [2024-12-06 17:48:50.290696] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.656 17:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:04.037 17:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:04.037 17:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:04.037 17:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.037 17:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:04.037 17:48:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.573 [2024-12-06 17:48:53.947322] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.573 17:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:07.951 17:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:07.951 17:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:07.951 17:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.952 17:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:07.952 17:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:09.858 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:09.858 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:09.858 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.858 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:09.858 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.858 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:09.858 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.117 [2024-12-06 17:48:57.730929] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.117 17:48:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:11.497 17:48:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:11.497 17:48:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:11.497 17:48:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:11.497 17:48:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:11.497 17:48:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:14.055 
17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:14.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
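Each of the five iterations above follows the same arc. Gathered into one place (rpc_cmd, waitforserial, and waitforserial_disconnect are the suite helpers already seen in this trace; NVME_HOSTNQN/NVME_HOSTID were set when common.sh was sourced):

  for i in $(seq 1 5); do                    # loops=5, set at target/rpc.sh@11
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # nsid 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME       # block device must appear
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done
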
00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.055 [2024-12-06 17:49:01.414767] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.055 17:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.437 17:49:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:15.437 17:49:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:15.437 17:49:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.437 17:49:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:15.437 17:49:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:17.340 17:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:17.340 17:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:17.340 17:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.340 17:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:17.340 17:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.340 17:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:17.340 17:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.340 [2024-12-06 17:49:05.105905] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.340 17:49:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.255 17:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:19.255 17:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:19.255 17:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.255 17:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:19.255 17:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:21.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:21.274 
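The `seq 1 5` at the end of the trace above closes out the first loop (target/rpc.sh@81-94) and opens a second one. Each iteration of the first loop builds a subsystem over the RPC socket, attaches a host with nvme-cli, and tears everything down again. Reconstructed as hand-issued commands, with every RPC name and flag taken from the trace and only the standalone shell form assumed (rpc_cmd in the harness wraps scripts/rpc.py):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Build the subsystem: serial number, TCP listener, one namespace, open access.
    $RPC nvmf_create_subsystem $NQN -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns $NQN Malloc1 -n 5
    $RPC nvmf_subsystem_allow_any_host $NQN

    # Attach from the initiator and wait for the namespace to surface as a block device.
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
                 --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb \
                 -t tcp -n $NQN -a 10.0.0.2 -s 4420

    # Tear down: disconnect the host, drop NSID 5, delete the subsystem.
    nvme disconnect -n $NQN
    $RPC nvmf_subsystem_remove_ns $NQN 5
    $RPC nvmf_delete_subsystem $NQN

The second loop that follows (target/rpc.sh@99-107) exercises the same lifecycle five times without a host connection, adding the namespace with an auto-assigned NSID (`nvmf_subsystem_add_ns $NQN Malloc1`, no -n flag) and removing it as NSID 1.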
17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.274 [2024-12-06 17:49:08.805719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.274 [2024-12-06 17:49:08.853834] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.274 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.275 
17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 [2024-12-06 17:49:08.901942] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 [2024-12-06 17:49:08.950080] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.275 17:49:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 [2024-12-06 17:49:08.998230] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.275 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:21.275 "tick_rate": 2400000000, 00:12:21.275 "poll_groups": [ 00:12:21.275 { 00:12:21.275 "name": "nvmf_tgt_poll_group_000", 00:12:21.275 "admin_qpairs": 0, 00:12:21.275 "io_qpairs": 224, 00:12:21.275 "current_admin_qpairs": 0, 00:12:21.275 "current_io_qpairs": 0, 00:12:21.275 "pending_bdev_io": 0, 00:12:21.275 "completed_nvme_io": 307, 00:12:21.275 "transports": [ 00:12:21.275 { 00:12:21.275 "trtype": "TCP" 00:12:21.275 } 00:12:21.275 ] 00:12:21.275 }, 00:12:21.275 { 00:12:21.275 "name": "nvmf_tgt_poll_group_001", 00:12:21.275 "admin_qpairs": 1, 00:12:21.275 "io_qpairs": 223, 00:12:21.275 "current_admin_qpairs": 0, 00:12:21.275 "current_io_qpairs": 0, 00:12:21.276 "pending_bdev_io": 0, 00:12:21.276 "completed_nvme_io": 273, 00:12:21.276 "transports": [ 00:12:21.276 { 00:12:21.276 "trtype": "TCP" 00:12:21.276 } 00:12:21.276 ] 00:12:21.276 }, 00:12:21.276 { 00:12:21.276 "name": "nvmf_tgt_poll_group_002", 00:12:21.276 "admin_qpairs": 6, 00:12:21.276 "io_qpairs": 218, 00:12:21.276 "current_admin_qpairs": 0, 00:12:21.276 "current_io_qpairs": 0, 00:12:21.276 "pending_bdev_io": 0, 00:12:21.276 "completed_nvme_io": 435, 00:12:21.276 "transports": [ 00:12:21.276 { 00:12:21.276 "trtype": "TCP" 00:12:21.276 } 00:12:21.276 ] 00:12:21.276 }, 00:12:21.276 { 00:12:21.276 "name": "nvmf_tgt_poll_group_003", 00:12:21.276 "admin_qpairs": 0, 00:12:21.276 "io_qpairs": 224, 00:12:21.276 "current_admin_qpairs": 0, 00:12:21.276 "current_io_qpairs": 0, 00:12:21.276 "pending_bdev_io": 0, 00:12:21.276 "completed_nvme_io": 224, 00:12:21.276 "transports": [ 00:12:21.276 { 00:12:21.276 "trtype": "TCP" 00:12:21.276 } 00:12:21.276 ] 00:12:21.276 } 00:12:21.276 ] 00:12:21.276 }' 00:12:21.276 17:49:09 
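The nvmf_get_stats dump above reports one entry per poll group; the jsum helper traced next collapses a field across all groups into one number by piping jq output through awk. A standalone equivalent, with the jq filter and awk program verbatim from the trace ($RPC as in the earlier sketch):

    # Sum admin queue pairs across all poll groups:
    # 0 + 1 + 6 + 0 = 7 for the dump above.
    $RPC nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'

The assertions that follow only require the sums to be positive ((( 7 > 0 )) for admin qpairs, (( 889 > 0 )) for I/O qpairs), i.e. that the connect/disconnect churn really created queue pairs on the target.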
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:21.276 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:21.276 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:21.276 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:21.276 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:21.276 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:21.276 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:21.276 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:21.276 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:21.557 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:21.558 rmmod nvme_tcp 00:12:21.558 rmmod nvme_fabrics 00:12:21.558 rmmod nvme_keyring 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2928715 ']' 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2928715 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2928715 ']' 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2928715 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2928715 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2928715' 00:12:21.558 killing process with pid 2928715 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2928715 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2928715 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.558 17:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:24.097 00:12:24.097 real 0m35.601s 00:12:24.097 user 1m50.629s 00:12:24.097 sys 0m6.267s 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.097 ************************************ 00:12:24.097 END TEST nvmf_rpc 00:12:24.097 ************************************ 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:24.097 ************************************ 00:12:24.097 START TEST nvmf_invalid 00:12:24.097 ************************************ 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:24.097 * Looking for test storage... 
00:12:24.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.097 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:24.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.098 --rc genhtml_branch_coverage=1 00:12:24.098 --rc genhtml_function_coverage=1 00:12:24.098 --rc genhtml_legend=1 00:12:24.098 --rc geninfo_all_blocks=1 00:12:24.098 --rc geninfo_unexecuted_blocks=1 00:12:24.098 00:12:24.098 ' 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:24.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.098 --rc genhtml_branch_coverage=1 00:12:24.098 --rc genhtml_function_coverage=1 00:12:24.098 --rc genhtml_legend=1 00:12:24.098 --rc geninfo_all_blocks=1 00:12:24.098 --rc geninfo_unexecuted_blocks=1 00:12:24.098 00:12:24.098 ' 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:24.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.098 --rc genhtml_branch_coverage=1 00:12:24.098 --rc genhtml_function_coverage=1 00:12:24.098 --rc genhtml_legend=1 00:12:24.098 --rc geninfo_all_blocks=1 00:12:24.098 --rc geninfo_unexecuted_blocks=1 00:12:24.098 00:12:24.098 ' 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:24.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.098 --rc genhtml_branch_coverage=1 00:12:24.098 --rc genhtml_function_coverage=1 00:12:24.098 --rc genhtml_legend=1 00:12:24.098 --rc geninfo_all_blocks=1 00:12:24.098 --rc geninfo_unexecuted_blocks=1 00:12:24.098 00:12:24.098 ' 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:24.098 17:49:11 
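The `lt 1.15 2` trace above is scripts/common.sh deciding whether the installed lcov predates 2.x so the matching LCOV_OPTS can be exported. cmp_versions splits both version strings on `.`, `-`, and `:` and compares them component-wise. A condensed sketch of that comparison (helper name and exact shape assumed; the IFS split and the per-component (( ver1[v] < ver2[v] )) test mirror the trace):

    # Component-wise "less than" for dotted version strings.
    # Returns 0 (true) when $1 sorts before $2, e.g. version_lt 1.15 2.
    version_lt() {
        local IFS=.-:
        local -a v1=($1) v2=($2)
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }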
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:24.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:24.098 17:49:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:29.371 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:29.372 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:29.372 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:29.372 Found net devices under 0000:31:00.0: cvl_0_0 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:29.372 Found net devices under 0000:31:00.1: cvl_0_1 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:29.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:29.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:12:29.372 00:12:29.372 --- 10.0.0.2 ping statistics --- 00:12:29.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.372 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:29.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:12:29.372 00:12:29.372 --- 10.0.0.1 ping statistics --- 00:12:29.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.372 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2939228 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2939228 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2939228 ']' 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:29.372 17:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:29.372 [2024-12-06 17:49:16.983287] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
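nvmf_tcp_init, traced above, splits the two E810 ports into a target side (cvl_0_0, moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2) and an initiator side (cvl_0_1, left in the root namespace as 10.0.0.1), opens TCP/4420 in iptables with a tagged rule, and ping-checks both directions before starting the target. A condensed sketch of that setup, assuming root privileges and the interface, namespace, and address names from this run; the comment tag is shortened here for illustration (the real rule embeds its own full rule text):

  # Sketch of the target/initiator split traced above (names from this run).
  TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"            # target port lives in the netns
  ip addr add 10.0.0.1/24 dev "$INI_IF"        # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  # Allow NVMe/TCP (port 4420) in, tagged so teardown can find the rule again.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:test'
  ping -c 1 10.0.0.2                           # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1       # target ns -> root ns

Running the target inside a namespace lets one machine act as both target and initiator over real NICs, which is why every target-side command above is wrapped in "ip netns exec".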
00:12:29.373 [2024-12-06 17:49:16.983338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.373 [2024-12-06 17:49:17.067676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.373 [2024-12-06 17:49:17.104488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.373 [2024-12-06 17:49:17.104521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.373 [2024-12-06 17:49:17.104530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.373 [2024-12-06 17:49:17.104537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.373 [2024-12-06 17:49:17.104543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.373 [2024-12-06 17:49:17.106225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.373 [2024-12-06 17:49:17.106350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.373 [2024-12-06 17:49:17.106500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.373 [2024-12-06 17:49:17.106501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.939 17:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.939 17:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:29.939 17:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:29.939 17:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:29.939 17:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:30.198 17:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.198 17:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:30.198 17:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12596 00:12:30.198 [2024-12-06 17:49:17.925658] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:30.198 17:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:30.198 { 00:12:30.198 "nqn": "nqn.2016-06.io.spdk:cnode12596", 00:12:30.198 "tgt_name": "foobar", 00:12:30.198 "method": "nvmf_create_subsystem", 00:12:30.198 "req_id": 1 00:12:30.198 } 00:12:30.198 Got JSON-RPC error response 00:12:30.198 response: 00:12:30.198 { 00:12:30.198 "code": -32603, 00:12:30.198 "message": "Unable to find target foobar" 00:12:30.198 }' 00:12:30.198 17:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:30.198 { 00:12:30.198 "nqn": "nqn.2016-06.io.spdk:cnode12596", 00:12:30.198 "tgt_name": "foobar", 00:12:30.198 "method": "nvmf_create_subsystem", 00:12:30.198 "req_id": 1 00:12:30.198 } 00:12:30.198 Got JSON-RPC error response 00:12:30.198 
response: 00:12:30.198 { 00:12:30.198 "code": -32603, 00:12:30.198 "message": "Unable to find target foobar" 00:12:30.198 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:30.198 17:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:30.198 17:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode20672 00:12:30.456 [2024-12-06 17:49:18.090221] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20672: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:30.457 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:30.457 { 00:12:30.457 "nqn": "nqn.2016-06.io.spdk:cnode20672", 00:12:30.457 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:30.457 "method": "nvmf_create_subsystem", 00:12:30.457 "req_id": 1 00:12:30.457 } 00:12:30.457 Got JSON-RPC error response 00:12:30.457 response: 00:12:30.457 { 00:12:30.457 "code": -32602, 00:12:30.457 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:30.457 }' 00:12:30.457 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:30.457 { 00:12:30.457 "nqn": "nqn.2016-06.io.spdk:cnode20672", 00:12:30.457 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:30.457 "method": "nvmf_create_subsystem", 00:12:30.457 "req_id": 1 00:12:30.457 } 00:12:30.457 Got JSON-RPC error response 00:12:30.457 response: 00:12:30.457 { 00:12:30.457 "code": -32602, 00:12:30.457 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:30.457 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:30.457 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:30.457 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16443 00:12:30.457 [2024-12-06 17:49:18.254788] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16443: invalid model number 'SPDK_Controller' 00:12:30.457 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:30.457 { 00:12:30.457 "nqn": "nqn.2016-06.io.spdk:cnode16443", 00:12:30.457 "model_number": "SPDK_Controller\u001f", 00:12:30.457 "method": "nvmf_create_subsystem", 00:12:30.457 "req_id": 1 00:12:30.457 } 00:12:30.457 Got JSON-RPC error response 00:12:30.457 response: 00:12:30.457 { 00:12:30.457 "code": -32602, 00:12:30.457 "message": "Invalid MN SPDK_Controller\u001f" 00:12:30.457 }' 00:12:30.457 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:30.457 { 00:12:30.457 "nqn": "nqn.2016-06.io.spdk:cnode16443", 00:12:30.457 "model_number": "SPDK_Controller\u001f", 00:12:30.457 "method": "nvmf_create_subsystem", 00:12:30.457 "req_id": 1 00:12:30.457 } 00:12:30.457 Got JSON-RPC error response 00:12:30.457 response: 00:12:30.457 { 00:12:30.457 "code": -32602, 00:12:30.457 "message": "Invalid MN SPDK_Controller\u001f" 00:12:30.457 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:30.457 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:30.457 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:30.457 17:49:18 
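Each invalid-input case above follows the same pattern: invoke rpc.py, capture the JSON-RPC error body, and glob-match it against the expected message (the backslash-escaped match strings in the trace are just bash pattern quoting of that message). A minimal sketch of one such check, assuming SPDK's scripts/rpc.py is reachable as rpc.py (the run above invokes it by full workspace path); the message text comes from the "Unable to find target" case traced above:

  # Sketch: expect nvmf_create_subsystem to fail with a specific message.
  out=$(rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12596 2>&1) || true
  if [[ $out == *"Unable to find target"* ]]; then
      echo "negative test passed"
  else
      echo "unexpected response: $out" >&2
      exit 1
  fi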
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:30.457 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:30.457 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:30.457 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:30.457 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.457 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:30.457 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:30.457 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:30.457 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.457 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.717 17:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:30.717 
17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.717 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 
00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ B == \- ]] 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Br:ohG"|C1g[~zln-IRX~' 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Br:ohG"|C1g[~zln-IRX~' nqn.2016-06.io.spdk:cnode17926 00:12:30.718 [2024-12-06 17:49:18.507571] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17926: invalid serial number 'Br:ohG"|C1g[~zln-IRX~' 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:30.718 { 00:12:30.718 "nqn": "nqn.2016-06.io.spdk:cnode17926", 00:12:30.718 "serial_number": "Br:ohG\"|C1g[~zln-IRX~", 00:12:30.718 "method": "nvmf_create_subsystem", 00:12:30.718 "req_id": 1 00:12:30.718 } 00:12:30.718 Got JSON-RPC error response 00:12:30.718 response: 00:12:30.718 { 00:12:30.718 "code": -32602, 00:12:30.718 "message": "Invalid SN Br:ohG\"|C1g[~zln-IRX~" 00:12:30.718 }' 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:30.718 { 00:12:30.718 "nqn": "nqn.2016-06.io.spdk:cnode17926", 00:12:30.718 "serial_number": "Br:ohG\"|C1g[~zln-IRX~", 00:12:30.718 "method": "nvmf_create_subsystem", 00:12:30.718 "req_id": 1 00:12:30.718 } 00:12:30.718 Got JSON-RPC error response 00:12:30.718 response: 00:12:30.718 { 00:12:30.718 "code": -32602, 00:12:30.718 "message": "Invalid SN Br:ohG\"|C1g[~zln-IRX~" 00:12:30.718 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' 
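The long run of printf %x / echo -e steps above is gen_random_s at work: it draws random codes from the printable-ASCII table (32 through 127), renders each as \xNN, expands that with echo -e, and appends the character, producing serials like the Br:ohG"|C1g[~zln-IRX~ string echoed above. A compact sketch of the same generator, with function and variable names chosen here for illustration:

  # Sketch: random printable-ASCII string of a given length (cf. gen_random_s).
  gen_random_s() {
      local length=$1 ll string=
      for (( ll = 0; ll < length; ll++ )); do
          # Pick a code in [32,127], render it as \xNN, expand to a character.
          local code=$(( RANDOM % 96 + 32 ))
          string+=$(echo -e "\\x$(printf %x "$code")")
      done
      echo "$string"
  }
  gen_random_s 21   # e.g. a 21-character over-length serial number

Valid NVMe serial numbers are at most 20 characters, so a 21-character random string is guaranteed to trip the "Invalid SN" path regardless of its content.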
'73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:30.718 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 
00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 
00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.978 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=B 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x6c' 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:30.979 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.980 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.980 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:30.980 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:30.980 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:30.980 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:30.980 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:30.980 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ p == \- ]] 00:12:30.980 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'p'\''wPIOT-sGlr>zLDUIaW!YIsq+f*"9/UBP\Om#OlY' 00:12:30.980 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'p'\''wPIOT-sGlr>zLDUIaW!YIsq+f*"9/UBP\Om#OlY' nqn.2016-06.io.spdk:cnode32637 00:12:31.239 [2024-12-06 17:49:18.860712] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32637: invalid model number 'p'wPIOT-sGlr>zLDUIaW!YIsq+f*"9/UBP\Om#OlY' 00:12:31.239 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:31.239 { 00:12:31.239 "nqn": "nqn.2016-06.io.spdk:cnode32637", 00:12:31.239 "model_number": "p'\''wPIOT-sGlr>zLDUIaW!YIsq+f*\"9/UBP\\Om#OlY", 00:12:31.239 "method": "nvmf_create_subsystem", 00:12:31.239 "req_id": 1 00:12:31.239 } 00:12:31.239 Got JSON-RPC error response 00:12:31.239 response: 00:12:31.239 { 00:12:31.239 "code": -32602, 00:12:31.239 "message": "Invalid MN p'\''wPIOT-sGlr>zLDUIaW!YIsq+f*\"9/UBP\\Om#OlY" 00:12:31.239 }' 00:12:31.239 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:31.239 { 00:12:31.239 "nqn": "nqn.2016-06.io.spdk:cnode32637", 00:12:31.239 "model_number": "p'wPIOT-sGlr>zLDUIaW!YIsq+f*\"9/UBP\\Om#OlY", 00:12:31.239 "method": "nvmf_create_subsystem", 00:12:31.239 "req_id": 1 00:12:31.239 } 00:12:31.239 Got JSON-RPC error response 00:12:31.239 response: 00:12:31.239 { 00:12:31.239 "code": -32602, 00:12:31.239 "message": "Invalid MN p'wPIOT-sGlr>zLDUIaW!YIsq+f*\"9/UBP\\Om#OlY" 00:12:31.239 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:31.239 17:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:31.239 [2024-12-06 17:49:19.021344] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.239 17:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:31.498 17:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:31.498 17:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:31.499 17:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:31.499 17:49:19 
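After the model-number cases, the script switches to listener handling: nvmf_create_transport brings up the TCP transport (the "*** TCP Transport Init ***" notice above), a subsystem is created, and nvmf_subsystem_remove_listener is then asked to drop a listener that was never added, which must come back as "Invalid parameters" rather than a listener-stop error (note the != match against "Unable to stop listener." in the trace). A sketch of that sequence, again assuming rpc.py is on PATH:

  # Sketch: removing a listener that was never added must be rejected.
  rpc.py nvmf_create_transport --trtype tcp
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
  out=$(rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode \
        -t tcp -a '' -s 4421 2>&1) || true
  [[ $out != *"Unable to stop listener."* ]]   # per the != match in the trace
  [[ $out == *"Invalid parameters"* ]] && echo "listener negative test passed"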
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:31.499 17:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:31.757 [2024-12-06 17:49:19.347359] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:31.757 17:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:31.757 { 00:12:31.757 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:31.757 "listen_address": { 00:12:31.757 "trtype": "tcp", 00:12:31.757 "traddr": "", 00:12:31.757 "trsvcid": "4421" 00:12:31.757 }, 00:12:31.757 "method": "nvmf_subsystem_remove_listener", 00:12:31.757 "req_id": 1 00:12:31.757 } 00:12:31.757 Got JSON-RPC error response 00:12:31.757 response: 00:12:31.757 { 00:12:31.757 "code": -32602, 00:12:31.757 "message": "Invalid parameters" 00:12:31.757 }' 00:12:31.757 17:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:31.757 { 00:12:31.757 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:31.757 "listen_address": { 00:12:31.757 "trtype": "tcp", 00:12:31.757 "traddr": "", 00:12:31.757 "trsvcid": "4421" 00:12:31.757 }, 00:12:31.757 "method": "nvmf_subsystem_remove_listener", 00:12:31.757 "req_id": 1 00:12:31.757 } 00:12:31.757 Got JSON-RPC error response 00:12:31.757 response: 00:12:31.757 { 00:12:31.757 "code": -32602, 00:12:31.757 "message": "Invalid parameters" 00:12:31.757 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:31.757 17:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31825 -i 0 00:12:31.757 [2024-12-06 17:49:19.507819] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31825: invalid cntlid range [0-65519] 00:12:31.757 17:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:31.757 { 00:12:31.757 "nqn": "nqn.2016-06.io.spdk:cnode31825", 00:12:31.757 "min_cntlid": 0, 00:12:31.757 "method": "nvmf_create_subsystem", 00:12:31.757 "req_id": 1 00:12:31.757 } 00:12:31.757 Got JSON-RPC error response 00:12:31.757 response: 00:12:31.757 { 00:12:31.757 "code": -32602, 00:12:31.757 "message": "Invalid cntlid range [0-65519]" 00:12:31.757 }' 00:12:31.757 17:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:31.757 { 00:12:31.757 "nqn": "nqn.2016-06.io.spdk:cnode31825", 00:12:31.757 "min_cntlid": 0, 00:12:31.758 "method": "nvmf_create_subsystem", 00:12:31.758 "req_id": 1 00:12:31.758 } 00:12:31.758 Got JSON-RPC error response 00:12:31.758 response: 00:12:31.758 { 00:12:31.758 "code": -32602, 00:12:31.758 "message": "Invalid cntlid range [0-65519]" 00:12:31.758 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:31.758 17:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6315 -i 65520 00:12:32.017 [2024-12-06 17:49:19.672307] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6315: invalid cntlid range [65520-65519] 00:12:32.017 17:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:32.017 { 00:12:32.017 "nqn": 
"nqn.2016-06.io.spdk:cnode6315", 00:12:32.017 "min_cntlid": 65520, 00:12:32.017 "method": "nvmf_create_subsystem", 00:12:32.017 "req_id": 1 00:12:32.017 } 00:12:32.017 Got JSON-RPC error response 00:12:32.017 response: 00:12:32.017 { 00:12:32.017 "code": -32602, 00:12:32.017 "message": "Invalid cntlid range [65520-65519]" 00:12:32.017 }' 00:12:32.017 17:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:32.017 { 00:12:32.017 "nqn": "nqn.2016-06.io.spdk:cnode6315", 00:12:32.017 "min_cntlid": 65520, 00:12:32.017 "method": "nvmf_create_subsystem", 00:12:32.017 "req_id": 1 00:12:32.017 } 00:12:32.017 Got JSON-RPC error response 00:12:32.017 response: 00:12:32.017 { 00:12:32.017 "code": -32602, 00:12:32.017 "message": "Invalid cntlid range [65520-65519]" 00:12:32.017 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:32.017 17:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18036 -I 0 00:12:32.017 [2024-12-06 17:49:19.832794] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18036: invalid cntlid range [1-0] 00:12:32.276 17:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:32.276 { 00:12:32.276 "nqn": "nqn.2016-06.io.spdk:cnode18036", 00:12:32.276 "max_cntlid": 0, 00:12:32.276 "method": "nvmf_create_subsystem", 00:12:32.276 "req_id": 1 00:12:32.276 } 00:12:32.276 Got JSON-RPC error response 00:12:32.276 response: 00:12:32.276 { 00:12:32.276 "code": -32602, 00:12:32.276 "message": "Invalid cntlid range [1-0]" 00:12:32.276 }' 00:12:32.276 17:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:32.276 { 00:12:32.276 "nqn": "nqn.2016-06.io.spdk:cnode18036", 00:12:32.276 "max_cntlid": 0, 00:12:32.276 "method": "nvmf_create_subsystem", 00:12:32.276 "req_id": 1 00:12:32.276 } 00:12:32.276 Got JSON-RPC error response 00:12:32.276 response: 00:12:32.276 { 00:12:32.276 "code": -32602, 00:12:32.276 "message": "Invalid cntlid range [1-0]" 00:12:32.276 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:32.276 17:49:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2516 -I 65520 00:12:32.276 [2024-12-06 17:49:19.993289] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2516: invalid cntlid range [1-65520] 00:12:32.276 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:32.276 { 00:12:32.276 "nqn": "nqn.2016-06.io.spdk:cnode2516", 00:12:32.276 "max_cntlid": 65520, 00:12:32.276 "method": "nvmf_create_subsystem", 00:12:32.276 "req_id": 1 00:12:32.277 } 00:12:32.277 Got JSON-RPC error response 00:12:32.277 response: 00:12:32.277 { 00:12:32.277 "code": -32602, 00:12:32.277 "message": "Invalid cntlid range [1-65520]" 00:12:32.277 }' 00:12:32.277 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:32.277 { 00:12:32.277 "nqn": "nqn.2016-06.io.spdk:cnode2516", 00:12:32.277 "max_cntlid": 65520, 00:12:32.277 "method": "nvmf_create_subsystem", 00:12:32.277 "req_id": 1 00:12:32.277 } 00:12:32.277 Got JSON-RPC error response 00:12:32.277 response: 00:12:32.277 { 00:12:32.277 "code": -32602, 00:12:32.277 "message": "Invalid cntlid range [1-65520]" 
00:12:32.277 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:32.277 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21683 -i 6 -I 5 00:12:32.536 [2024-12-06 17:49:20.157838] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21683: invalid cntlid range [6-5] 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:32.536 { 00:12:32.536 "nqn": "nqn.2016-06.io.spdk:cnode21683", 00:12:32.536 "min_cntlid": 6, 00:12:32.536 "max_cntlid": 5, 00:12:32.536 "method": "nvmf_create_subsystem", 00:12:32.536 "req_id": 1 00:12:32.536 } 00:12:32.536 Got JSON-RPC error response 00:12:32.536 response: 00:12:32.536 { 00:12:32.536 "code": -32602, 00:12:32.536 "message": "Invalid cntlid range [6-5]" 00:12:32.536 }' 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:32.536 { 00:12:32.536 "nqn": "nqn.2016-06.io.spdk:cnode21683", 00:12:32.536 "min_cntlid": 6, 00:12:32.536 "max_cntlid": 5, 00:12:32.536 "method": "nvmf_create_subsystem", 00:12:32.536 "req_id": 1 00:12:32.536 } 00:12:32.536 Got JSON-RPC error response 00:12:32.536 response: 00:12:32.536 { 00:12:32.536 "code": -32602, 00:12:32.536 "message": "Invalid cntlid range [6-5]" 00:12:32.536 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:32.536 { 00:12:32.536 "name": "foobar", 00:12:32.536 "method": "nvmf_delete_target", 00:12:32.536 "req_id": 1 00:12:32.536 } 00:12:32.536 Got JSON-RPC error response 00:12:32.536 response: 00:12:32.536 { 00:12:32.536 "code": -32602, 00:12:32.536 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:32.536 }' 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:32.536 { 00:12:32.536 "name": "foobar", 00:12:32.536 "method": "nvmf_delete_target", 00:12:32.536 "req_id": 1 00:12:32.536 } 00:12:32.536 Got JSON-RPC error response 00:12:32.536 response: 00:12:32.536 { 00:12:32.536 "code": -32602, 00:12:32.536 "message": "The specified target doesn't exist, cannot delete it." 
00:12:32.536 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:32.536 rmmod nvme_tcp 00:12:32.536 rmmod nvme_fabrics 00:12:32.536 rmmod nvme_keyring 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2939228 ']' 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2939228 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2939228 ']' 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2939228 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:32.536 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2939228 00:12:32.795 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:32.795 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:32.795 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2939228' 00:12:32.795 killing process with pid 2939228 00:12:32.795 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2939228 00:12:32.795 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2939228 00:12:32.795 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:32.795 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:32.795 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:32.795 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:32.795 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:32.795 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:32.795 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore 00:12:32.795 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:32.795 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:32.795 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.795 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.795 17:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.698 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:34.958 00:12:34.958 real 0m11.088s 00:12:34.958 user 0m16.888s 00:12:34.958 sys 0m4.799s 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:34.958 ************************************ 00:12:34.958 END TEST nvmf_invalid 00:12:34.958 ************************************ 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:34.958 ************************************ 00:12:34.958 START TEST nvmf_connect_stress 00:12:34.958 ************************************ 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:34.958 * Looking for test storage... 
00:12:34.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:34.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.958 --rc genhtml_branch_coverage=1 00:12:34.958 --rc genhtml_function_coverage=1 00:12:34.958 --rc genhtml_legend=1 00:12:34.958 --rc geninfo_all_blocks=1 00:12:34.958 --rc geninfo_unexecuted_blocks=1 00:12:34.958 00:12:34.958 ' 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:34.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.958 --rc genhtml_branch_coverage=1 00:12:34.958 --rc genhtml_function_coverage=1 00:12:34.958 --rc genhtml_legend=1 00:12:34.958 --rc geninfo_all_blocks=1 00:12:34.958 --rc geninfo_unexecuted_blocks=1 00:12:34.958 00:12:34.958 ' 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:34.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.958 --rc genhtml_branch_coverage=1 00:12:34.958 --rc genhtml_function_coverage=1 00:12:34.958 --rc genhtml_legend=1 00:12:34.958 --rc geninfo_all_blocks=1 00:12:34.958 --rc geninfo_unexecuted_blocks=1 00:12:34.958 00:12:34.958 ' 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:34.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.958 --rc genhtml_branch_coverage=1 00:12:34.958 --rc genhtml_function_coverage=1 00:12:34.958 --rc genhtml_legend=1 00:12:34.958 --rc geninfo_all_blocks=1 00:12:34.958 --rc geninfo_unexecuted_blocks=1 00:12:34.958 00:12:34.958 ' 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:34.958 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:34.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:34.959 17:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:40.222 17:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:40.222 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:40.223 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:40.223 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:40.223 Found net devices under 0000:31:00.0: cvl_0_0 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:40.223 Found net devices under 0000:31:00.1: cvl_0_1 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:40.223 17:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:40.223 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:40.223 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:40.223 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:40.223 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:40.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:40.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:12:40.223 00:12:40.223 --- 10.0.0.2 ping statistics --- 00:12:40.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.223 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:12:40.223 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:40.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:40.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:12:40.483 00:12:40.483 --- 10.0.0.1 ping statistics --- 00:12:40.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.483 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2944420 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2944420 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2944420 ']' 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.483 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:40.483 [2024-12-06 17:49:28.120933] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:12:40.483 [2024-12-06 17:49:28.120985] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.483 [2024-12-06 17:49:28.194442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:40.483 [2024-12-06 17:49:28.225684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.483 [2024-12-06 17:49:28.225714] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.483 [2024-12-06 17:49:28.225721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.483 [2024-12-06 17:49:28.225725] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.483 [2024-12-06 17:49:28.225729] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.483 [2024-12-06 17:49:28.226863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.483 [2024-12-06 17:49:28.227013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.483 [2024-12-06 17:49:28.227016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.421 [2024-12-06 17:49:28.932481] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:41.421 17:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.421 [2024-12-06 17:49:28.948656] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.421 NULL1 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2944767 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.421 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.422 17:49:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.422 17:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.422 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.422 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.422 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.422 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.422 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.422 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.422 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:41.422 17:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:41.422 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2944767 00:12:41.422 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.422 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.422 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.681 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.681 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2944767 00:12:41.681 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.681 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.681 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.941 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.941 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2944767 00:12:41.941 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.941 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.941 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.200 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.201 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2944767 00:12:42.201 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.201 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.201 17:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.771 17:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.771 17:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2944767 00:12:42.771 17:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.771 17:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.771 17:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.030 17:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.030 17:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2944767 00:12:43.030 17:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.030 17:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.030 17:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.290 17:49:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.290 17:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2944767 00:12:43.290 17:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.290 17:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.290 17:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
[... the same five-step probe ([[ 0 == 0 ]] / kill -0 2944767 / rpc_cmd / xtrace_disable / set +x) repeats at sub-second intervals while PID 2944767 stays alive; individual iterations elided, timestamps 00:12:43.551 (17:49:31) through 00:12:51.051 (17:49:38) ...]
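For readers skimming the trace: the probe above is the entirety of the stress wait, mirroring connect_stress.sh lines 34-35 as shown in the xtrace. A minimal sketch of the pattern, under the assumption that the per-iteration rpc_cmd traffic can be treated as opaque; the helper name poll_until_exit is illustrative, not from the SPDK tree.

#!/usr/bin/env bash
# Liveness-polling loop: kill -0 sends no signal, it only reports whether
# the PID still exists, so the loop spins until the stress process exits.
poll_until_exit() {
    local pid=$1
    while kill -0 "$pid" 2>/dev/null; do
        # the real loop issues an RPC here each pass (rpc_cmd in the trace)
        sleep 0.25
    done
    wait "$pid" 2>/dev/null   # reap the child and surface its exit status
}

sleep 3 &                     # stand-in for the stress process
poll_until_exit $!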
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.051 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2944767 00:12:51.051 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.051 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.051 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.310 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.310 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2944767 00:12:51.310 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.310 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.310 17:49:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:51.310 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2944767 00:12:51.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2944767) - No such process 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2944767 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:51.569 rmmod nvme_tcp 00:12:51.569 rmmod nvme_fabrics 00:12:51.569 rmmod nvme_keyring 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2944420 ']' 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2944420 00:12:51.569 17:49:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2944420 ']' 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2944420 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.569 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2944420 00:12:51.829 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:51.829 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:51.829 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2944420' 00:12:51.829 killing process with pid 2944420 00:12:51.829 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2944420 00:12:51.829 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2944420 00:12:51.829 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:51.829 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:51.829 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:51.829 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:51.829 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:51.829 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:51.829 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:51.829 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:51.829 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:51.829 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.829 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.829 17:49:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.735 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:53.736 00:12:53.736 real 0m18.984s 00:12:53.736 user 0m41.663s 00:12:53.736 sys 0m7.463s 00:12:53.736 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.736 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:53.736 ************************************ 00:12:53.736 END TEST nvmf_connect_stress 00:12:53.736 ************************************ 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh 
--transport=tcp 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:53.995 ************************************ 00:12:53.995 START TEST nvmf_fused_ordering 00:12:53.995 ************************************ 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:53.995 * Looking for test storage... 00:12:53.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:53.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.995 --rc genhtml_branch_coverage=1 00:12:53.995 --rc genhtml_function_coverage=1 00:12:53.995 --rc genhtml_legend=1 00:12:53.995 --rc geninfo_all_blocks=1 00:12:53.995 --rc geninfo_unexecuted_blocks=1 00:12:53.995 00:12:53.995 ' 00:12:53.995 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:53.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.995 --rc genhtml_branch_coverage=1 00:12:53.995 --rc genhtml_function_coverage=1 00:12:53.996 --rc genhtml_legend=1 00:12:53.996 --rc geninfo_all_blocks=1 00:12:53.996 --rc geninfo_unexecuted_blocks=1 00:12:53.996 00:12:53.996 ' 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:53.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.996 --rc genhtml_branch_coverage=1 00:12:53.996 --rc genhtml_function_coverage=1 00:12:53.996 --rc genhtml_legend=1 00:12:53.996 --rc geninfo_all_blocks=1 00:12:53.996 --rc geninfo_unexecuted_blocks=1 00:12:53.996 00:12:53.996 ' 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:53.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.996 --rc genhtml_branch_coverage=1 00:12:53.996 --rc genhtml_function_coverage=1 00:12:53.996 --rc genhtml_legend=1 00:12:53.996 --rc geninfo_all_blocks=1 00:12:53.996 --rc geninfo_unexecuted_blocks=1 00:12:53.996 00:12:53.996 ' 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated by six earlier export.sh prepends, elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=[... same directory set with the go toolchain prepended once more, elided ...] 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=[... same directory set with protoc prepended once more, elided ...] 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo [... the exported PATH value, elided ...] 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:12:53.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:53.996 17:49:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:59.272 17:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.272 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:59.273 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:59.273 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:59.273 Found net devices under 0000:31:00.0: cvl_0_0 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:59.273 Found net devices under 0000:31:00.1: cvl_0_1 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:59.273 17:49:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:59.273 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:59.273 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:59.273 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:59.273 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:59.533 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:59.533 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:59.533 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:59.533 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:59.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:59.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:12:59.533 00:12:59.533 --- 10.0.0.2 ping statistics --- 00:12:59.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.533 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:12:59.533 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:59.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:59.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:12:59.533 00:12:59.533 --- 10.0.0.1 ping statistics --- 00:12:59.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.533 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:12:59.533 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.533 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:59.533 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:59.533 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.533 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:59.533 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:59.533 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.533 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:59.533 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:59.534 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:59.534 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:59.534 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:59.534 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:59.534 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2951461 00:12:59.534 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2951461 00:12:59.534 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2951461 ']' 00:12:59.534 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.534 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.534 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
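Condensed from the ip/iptables commands just logged: the target-side port cvl_0_0 is isolated in namespace cvl_0_0_ns_spdk, the initiator keeps cvl_0_1, both ends get 10.0.0.x/24 addresses, TCP port 4420 is opened, and connectivity is verified with ping in each direction. A replayable sketch using the same names and addresses as the log (run as root); the harness's ipts wrapper additionally tags the rule with an SPDK_NVMF comment so it can be stripped on teardown.

#!/usr/bin/env bash
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                        # move target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1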
00:12:59.534 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.534 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:59.534 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:59.534 [2024-12-06 17:49:47.208902] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:12:59.534 [2024-12-06 17:49:47.208962] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.534 [2024-12-06 17:49:47.283326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.534 [2024-12-06 17:49:47.313681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.534 [2024-12-06 17:49:47.313710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.534 [2024-12-06 17:49:47.313716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.534 [2024-12-06 17:49:47.313724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.534 [2024-12-06 17:49:47.313728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.534 [2024-12-06 17:49:47.314200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:59.793 [2024-12-06 17:49:47.413337] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 
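The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from waitforlisten in autotest_common.sh, which the trace shows completing above with max_retries=100. A minimal sketch of that readiness gate; the rpc.py probe below is an assumption about how the check could be done, not the in-tree helper's exact mechanism.

#!/usr/bin/env bash
# Poll until the target's JSON-RPC socket answers, bounded by max_retries.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1           # app died before listening
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0                                      # RPC socket is answering
        fi
        sleep 0.1
    done
    return 1                                              # timed out
}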
00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:59.793 [2024-12-06 17:49:47.429518] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:59.793 NULL1 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:59.793 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.794 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:59.794 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.794 17:49:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:59.794 [2024-12-06 17:49:47.471846] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
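Everything the target needed for this test was provisioned through the rpc_cmd calls visible above (fused_ordering.sh lines 15-20), gathered here in one place. For reference per scripts/rpc.py: -a allows any host, -s sets the serial number, -m caps the namespace count, and -u 8192 is the IO unit size (-o is carried from NVMF_TRANSPORT_OPTS as logged); the null bdev is 1000 MiB with 512-byte blocks, matching the 1GB namespace reported below.

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512
rpc_cmd bdev_wait_for_examine
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The initiator-side fused_ordering binary then connects with the trtype/traddr/subnqn string logged above and drives the numbered iterations that follow.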
00:12:59.794 [2024-12-06 17:49:47.471875] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2951480 ]
00:13:00.054 Attached to nqn.2016-06.io.spdk:cnode1
00:13:00.054 Namespace ID: 1 size: 1GB
00:13:00.054 fused_ordering(0)
[... fused_ordering(1) through fused_ordering(409), one counter line per iteration, elided; timestamps run 00:13:00.054 through 00:13:00.317 ...]
00:13:00.887 fused_ordering(410)
[... fused_ordering(411) through fused_ordering(420) continue at 00:13:00.887; the run is still in progress where this excerpt ends ...]
fused_ordering(421) 00:13:00.887 fused_ordering(422) 00:13:00.887 fused_ordering(423) 00:13:00.887 fused_ordering(424) 00:13:00.887 fused_ordering(425) 00:13:00.887 fused_ordering(426) 00:13:00.887 fused_ordering(427) 00:13:00.887 fused_ordering(428) 00:13:00.887 fused_ordering(429) 00:13:00.887 fused_ordering(430) 00:13:00.887 fused_ordering(431) 00:13:00.888 fused_ordering(432) 00:13:00.888 fused_ordering(433) 00:13:00.888 fused_ordering(434) 00:13:00.888 fused_ordering(435) 00:13:00.888 fused_ordering(436) 00:13:00.888 fused_ordering(437) 00:13:00.888 fused_ordering(438) 00:13:00.888 fused_ordering(439) 00:13:00.888 fused_ordering(440) 00:13:00.888 fused_ordering(441) 00:13:00.888 fused_ordering(442) 00:13:00.888 fused_ordering(443) 00:13:00.888 fused_ordering(444) 00:13:00.888 fused_ordering(445) 00:13:00.888 fused_ordering(446) 00:13:00.888 fused_ordering(447) 00:13:00.888 fused_ordering(448) 00:13:00.888 fused_ordering(449) 00:13:00.888 fused_ordering(450) 00:13:00.888 fused_ordering(451) 00:13:00.888 fused_ordering(452) 00:13:00.888 fused_ordering(453) 00:13:00.888 fused_ordering(454) 00:13:00.888 fused_ordering(455) 00:13:00.888 fused_ordering(456) 00:13:00.888 fused_ordering(457) 00:13:00.888 fused_ordering(458) 00:13:00.888 fused_ordering(459) 00:13:00.888 fused_ordering(460) 00:13:00.888 fused_ordering(461) 00:13:00.888 fused_ordering(462) 00:13:00.888 fused_ordering(463) 00:13:00.888 fused_ordering(464) 00:13:00.888 fused_ordering(465) 00:13:00.888 fused_ordering(466) 00:13:00.888 fused_ordering(467) 00:13:00.888 fused_ordering(468) 00:13:00.888 fused_ordering(469) 00:13:00.888 fused_ordering(470) 00:13:00.888 fused_ordering(471) 00:13:00.888 fused_ordering(472) 00:13:00.888 fused_ordering(473) 00:13:00.888 fused_ordering(474) 00:13:00.888 fused_ordering(475) 00:13:00.888 fused_ordering(476) 00:13:00.888 fused_ordering(477) 00:13:00.888 fused_ordering(478) 00:13:00.888 fused_ordering(479) 00:13:00.888 fused_ordering(480) 00:13:00.888 fused_ordering(481) 00:13:00.888 fused_ordering(482) 00:13:00.888 fused_ordering(483) 00:13:00.888 fused_ordering(484) 00:13:00.888 fused_ordering(485) 00:13:00.888 fused_ordering(486) 00:13:00.888 fused_ordering(487) 00:13:00.888 fused_ordering(488) 00:13:00.888 fused_ordering(489) 00:13:00.888 fused_ordering(490) 00:13:00.888 fused_ordering(491) 00:13:00.888 fused_ordering(492) 00:13:00.888 fused_ordering(493) 00:13:00.888 fused_ordering(494) 00:13:00.888 fused_ordering(495) 00:13:00.888 fused_ordering(496) 00:13:00.888 fused_ordering(497) 00:13:00.888 fused_ordering(498) 00:13:00.888 fused_ordering(499) 00:13:00.888 fused_ordering(500) 00:13:00.888 fused_ordering(501) 00:13:00.888 fused_ordering(502) 00:13:00.888 fused_ordering(503) 00:13:00.888 fused_ordering(504) 00:13:00.888 fused_ordering(505) 00:13:00.888 fused_ordering(506) 00:13:00.888 fused_ordering(507) 00:13:00.888 fused_ordering(508) 00:13:00.888 fused_ordering(509) 00:13:00.888 fused_ordering(510) 00:13:00.888 fused_ordering(511) 00:13:00.888 fused_ordering(512) 00:13:00.888 fused_ordering(513) 00:13:00.888 fused_ordering(514) 00:13:00.888 fused_ordering(515) 00:13:00.888 fused_ordering(516) 00:13:00.888 fused_ordering(517) 00:13:00.888 fused_ordering(518) 00:13:00.888 fused_ordering(519) 00:13:00.888 fused_ordering(520) 00:13:00.888 fused_ordering(521) 00:13:00.888 fused_ordering(522) 00:13:00.888 fused_ordering(523) 00:13:00.888 fused_ordering(524) 00:13:00.888 fused_ordering(525) 00:13:00.888 fused_ordering(526) 00:13:00.888 fused_ordering(527) 00:13:00.888 fused_ordering(528) 
00:13:00.888 fused_ordering(529) 00:13:00.888 fused_ordering(530) 00:13:00.888 fused_ordering(531) 00:13:00.888 fused_ordering(532) 00:13:00.888 fused_ordering(533) 00:13:00.888 fused_ordering(534) 00:13:00.888 fused_ordering(535) 00:13:00.888 fused_ordering(536) 00:13:00.888 fused_ordering(537) 00:13:00.888 fused_ordering(538) 00:13:00.888 fused_ordering(539) 00:13:00.888 fused_ordering(540) 00:13:00.888 fused_ordering(541) 00:13:00.888 fused_ordering(542) 00:13:00.888 fused_ordering(543) 00:13:00.888 fused_ordering(544) 00:13:00.888 fused_ordering(545) 00:13:00.888 fused_ordering(546) 00:13:00.888 fused_ordering(547) 00:13:00.888 fused_ordering(548) 00:13:00.888 fused_ordering(549) 00:13:00.888 fused_ordering(550) 00:13:00.888 fused_ordering(551) 00:13:00.888 fused_ordering(552) 00:13:00.888 fused_ordering(553) 00:13:00.888 fused_ordering(554) 00:13:00.888 fused_ordering(555) 00:13:00.888 fused_ordering(556) 00:13:00.888 fused_ordering(557) 00:13:00.888 fused_ordering(558) 00:13:00.888 fused_ordering(559) 00:13:00.888 fused_ordering(560) 00:13:00.888 fused_ordering(561) 00:13:00.888 fused_ordering(562) 00:13:00.888 fused_ordering(563) 00:13:00.888 fused_ordering(564) 00:13:00.888 fused_ordering(565) 00:13:00.888 fused_ordering(566) 00:13:00.888 fused_ordering(567) 00:13:00.888 fused_ordering(568) 00:13:00.888 fused_ordering(569) 00:13:00.888 fused_ordering(570) 00:13:00.888 fused_ordering(571) 00:13:00.888 fused_ordering(572) 00:13:00.888 fused_ordering(573) 00:13:00.888 fused_ordering(574) 00:13:00.888 fused_ordering(575) 00:13:00.888 fused_ordering(576) 00:13:00.888 fused_ordering(577) 00:13:00.888 fused_ordering(578) 00:13:00.888 fused_ordering(579) 00:13:00.888 fused_ordering(580) 00:13:00.888 fused_ordering(581) 00:13:00.888 fused_ordering(582) 00:13:00.888 fused_ordering(583) 00:13:00.888 fused_ordering(584) 00:13:00.888 fused_ordering(585) 00:13:00.888 fused_ordering(586) 00:13:00.888 fused_ordering(587) 00:13:00.888 fused_ordering(588) 00:13:00.888 fused_ordering(589) 00:13:00.888 fused_ordering(590) 00:13:00.888 fused_ordering(591) 00:13:00.888 fused_ordering(592) 00:13:00.888 fused_ordering(593) 00:13:00.888 fused_ordering(594) 00:13:00.888 fused_ordering(595) 00:13:00.888 fused_ordering(596) 00:13:00.888 fused_ordering(597) 00:13:00.888 fused_ordering(598) 00:13:00.888 fused_ordering(599) 00:13:00.888 fused_ordering(600) 00:13:00.888 fused_ordering(601) 00:13:00.888 fused_ordering(602) 00:13:00.888 fused_ordering(603) 00:13:00.888 fused_ordering(604) 00:13:00.888 fused_ordering(605) 00:13:00.888 fused_ordering(606) 00:13:00.888 fused_ordering(607) 00:13:00.888 fused_ordering(608) 00:13:00.888 fused_ordering(609) 00:13:00.888 fused_ordering(610) 00:13:00.888 fused_ordering(611) 00:13:00.888 fused_ordering(612) 00:13:00.888 fused_ordering(613) 00:13:00.888 fused_ordering(614) 00:13:00.888 fused_ordering(615) 00:13:01.148 fused_ordering(616) 00:13:01.148 fused_ordering(617) 00:13:01.148 fused_ordering(618) 00:13:01.148 fused_ordering(619) 00:13:01.148 fused_ordering(620) 00:13:01.148 fused_ordering(621) 00:13:01.148 fused_ordering(622) 00:13:01.148 fused_ordering(623) 00:13:01.148 fused_ordering(624) 00:13:01.148 fused_ordering(625) 00:13:01.148 fused_ordering(626) 00:13:01.148 fused_ordering(627) 00:13:01.148 fused_ordering(628) 00:13:01.148 fused_ordering(629) 00:13:01.148 fused_ordering(630) 00:13:01.148 fused_ordering(631) 00:13:01.148 fused_ordering(632) 00:13:01.148 fused_ordering(633) 00:13:01.148 fused_ordering(634) 00:13:01.148 fused_ordering(635) 00:13:01.148 
fused_ordering(636) 00:13:01.148 fused_ordering(637) 00:13:01.148 fused_ordering(638) 00:13:01.148 fused_ordering(639) 00:13:01.148 fused_ordering(640) 00:13:01.148 fused_ordering(641) 00:13:01.148 fused_ordering(642) 00:13:01.148 fused_ordering(643) 00:13:01.148 fused_ordering(644) 00:13:01.148 fused_ordering(645) 00:13:01.148 fused_ordering(646) 00:13:01.148 fused_ordering(647) 00:13:01.148 fused_ordering(648) 00:13:01.148 fused_ordering(649) 00:13:01.148 fused_ordering(650) 00:13:01.148 fused_ordering(651) 00:13:01.148 fused_ordering(652) 00:13:01.148 fused_ordering(653) 00:13:01.148 fused_ordering(654) 00:13:01.148 fused_ordering(655) 00:13:01.148 fused_ordering(656) 00:13:01.149 fused_ordering(657) 00:13:01.149 fused_ordering(658) 00:13:01.149 fused_ordering(659) 00:13:01.149 fused_ordering(660) 00:13:01.149 fused_ordering(661) 00:13:01.149 fused_ordering(662) 00:13:01.149 fused_ordering(663) 00:13:01.149 fused_ordering(664) 00:13:01.149 fused_ordering(665) 00:13:01.149 fused_ordering(666) 00:13:01.149 fused_ordering(667) 00:13:01.149 fused_ordering(668) 00:13:01.149 fused_ordering(669) 00:13:01.149 fused_ordering(670) 00:13:01.149 fused_ordering(671) 00:13:01.149 fused_ordering(672) 00:13:01.149 fused_ordering(673) 00:13:01.149 fused_ordering(674) 00:13:01.149 fused_ordering(675) 00:13:01.149 fused_ordering(676) 00:13:01.149 fused_ordering(677) 00:13:01.149 fused_ordering(678) 00:13:01.149 fused_ordering(679) 00:13:01.149 fused_ordering(680) 00:13:01.149 fused_ordering(681) 00:13:01.149 fused_ordering(682) 00:13:01.149 fused_ordering(683) 00:13:01.149 fused_ordering(684) 00:13:01.149 fused_ordering(685) 00:13:01.149 fused_ordering(686) 00:13:01.149 fused_ordering(687) 00:13:01.149 fused_ordering(688) 00:13:01.149 fused_ordering(689) 00:13:01.149 fused_ordering(690) 00:13:01.149 fused_ordering(691) 00:13:01.149 fused_ordering(692) 00:13:01.149 fused_ordering(693) 00:13:01.149 fused_ordering(694) 00:13:01.149 fused_ordering(695) 00:13:01.149 fused_ordering(696) 00:13:01.149 fused_ordering(697) 00:13:01.149 fused_ordering(698) 00:13:01.149 fused_ordering(699) 00:13:01.149 fused_ordering(700) 00:13:01.149 fused_ordering(701) 00:13:01.149 fused_ordering(702) 00:13:01.149 fused_ordering(703) 00:13:01.149 fused_ordering(704) 00:13:01.149 fused_ordering(705) 00:13:01.149 fused_ordering(706) 00:13:01.149 fused_ordering(707) 00:13:01.149 fused_ordering(708) 00:13:01.149 fused_ordering(709) 00:13:01.149 fused_ordering(710) 00:13:01.149 fused_ordering(711) 00:13:01.149 fused_ordering(712) 00:13:01.149 fused_ordering(713) 00:13:01.149 fused_ordering(714) 00:13:01.149 fused_ordering(715) 00:13:01.149 fused_ordering(716) 00:13:01.149 fused_ordering(717) 00:13:01.149 fused_ordering(718) 00:13:01.149 fused_ordering(719) 00:13:01.149 fused_ordering(720) 00:13:01.149 fused_ordering(721) 00:13:01.149 fused_ordering(722) 00:13:01.149 fused_ordering(723) 00:13:01.149 fused_ordering(724) 00:13:01.149 fused_ordering(725) 00:13:01.149 fused_ordering(726) 00:13:01.149 fused_ordering(727) 00:13:01.149 fused_ordering(728) 00:13:01.149 fused_ordering(729) 00:13:01.149 fused_ordering(730) 00:13:01.149 fused_ordering(731) 00:13:01.149 fused_ordering(732) 00:13:01.149 fused_ordering(733) 00:13:01.149 fused_ordering(734) 00:13:01.149 fused_ordering(735) 00:13:01.149 fused_ordering(736) 00:13:01.149 fused_ordering(737) 00:13:01.149 fused_ordering(738) 00:13:01.149 fused_ordering(739) 00:13:01.149 fused_ordering(740) 00:13:01.149 fused_ordering(741) 00:13:01.149 fused_ordering(742) 00:13:01.149 fused_ordering(743) 
00:13:01.149 fused_ordering(744) 00:13:01.149 fused_ordering(745) 00:13:01.149 fused_ordering(746) 00:13:01.149 fused_ordering(747) 00:13:01.149 fused_ordering(748) 00:13:01.149 fused_ordering(749) 00:13:01.149 fused_ordering(750) 00:13:01.149 fused_ordering(751) 00:13:01.149 fused_ordering(752) 00:13:01.149 fused_ordering(753) 00:13:01.149 fused_ordering(754) 00:13:01.149 fused_ordering(755) 00:13:01.149 fused_ordering(756) 00:13:01.149 fused_ordering(757) 00:13:01.149 fused_ordering(758) 00:13:01.149 fused_ordering(759) 00:13:01.149 fused_ordering(760) 00:13:01.149 fused_ordering(761) 00:13:01.149 fused_ordering(762) 00:13:01.149 fused_ordering(763) 00:13:01.149 fused_ordering(764) 00:13:01.149 fused_ordering(765) 00:13:01.149 fused_ordering(766) 00:13:01.149 fused_ordering(767) 00:13:01.149 fused_ordering(768) 00:13:01.149 fused_ordering(769) 00:13:01.149 fused_ordering(770) 00:13:01.149 fused_ordering(771) 00:13:01.149 fused_ordering(772) 00:13:01.149 fused_ordering(773) 00:13:01.149 fused_ordering(774) 00:13:01.149 fused_ordering(775) 00:13:01.149 fused_ordering(776) 00:13:01.149 fused_ordering(777) 00:13:01.149 fused_ordering(778) 00:13:01.149 fused_ordering(779) 00:13:01.149 fused_ordering(780) 00:13:01.149 fused_ordering(781) 00:13:01.149 fused_ordering(782) 00:13:01.149 fused_ordering(783) 00:13:01.149 fused_ordering(784) 00:13:01.149 fused_ordering(785) 00:13:01.149 fused_ordering(786) 00:13:01.149 fused_ordering(787) 00:13:01.149 fused_ordering(788) 00:13:01.149 fused_ordering(789) 00:13:01.149 fused_ordering(790) 00:13:01.149 fused_ordering(791) 00:13:01.149 fused_ordering(792) 00:13:01.149 fused_ordering(793) 00:13:01.149 fused_ordering(794) 00:13:01.149 fused_ordering(795) 00:13:01.149 fused_ordering(796) 00:13:01.149 fused_ordering(797) 00:13:01.149 fused_ordering(798) 00:13:01.149 fused_ordering(799) 00:13:01.149 fused_ordering(800) 00:13:01.149 fused_ordering(801) 00:13:01.149 fused_ordering(802) 00:13:01.149 fused_ordering(803) 00:13:01.149 fused_ordering(804) 00:13:01.149 fused_ordering(805) 00:13:01.149 fused_ordering(806) 00:13:01.149 fused_ordering(807) 00:13:01.149 fused_ordering(808) 00:13:01.149 fused_ordering(809) 00:13:01.149 fused_ordering(810) 00:13:01.149 fused_ordering(811) 00:13:01.149 fused_ordering(812) 00:13:01.149 fused_ordering(813) 00:13:01.149 fused_ordering(814) 00:13:01.149 fused_ordering(815) 00:13:01.149 fused_ordering(816) 00:13:01.149 fused_ordering(817) 00:13:01.149 fused_ordering(818) 00:13:01.149 fused_ordering(819) 00:13:01.149 fused_ordering(820) 00:13:01.719 fused_ordering(821) 00:13:01.719 fused_ordering(822) 00:13:01.719 fused_ordering(823) 00:13:01.719 fused_ordering(824) 00:13:01.719 fused_ordering(825) 00:13:01.719 fused_ordering(826) 00:13:01.719 fused_ordering(827) 00:13:01.719 fused_ordering(828) 00:13:01.719 fused_ordering(829) 00:13:01.719 fused_ordering(830) 00:13:01.719 fused_ordering(831) 00:13:01.719 fused_ordering(832) 00:13:01.719 fused_ordering(833) 00:13:01.719 fused_ordering(834) 00:13:01.719 fused_ordering(835) 00:13:01.719 fused_ordering(836) 00:13:01.719 fused_ordering(837) 00:13:01.719 fused_ordering(838) 00:13:01.719 fused_ordering(839) 00:13:01.719 fused_ordering(840) 00:13:01.719 fused_ordering(841) 00:13:01.719 fused_ordering(842) 00:13:01.719 fused_ordering(843) 00:13:01.719 fused_ordering(844) 00:13:01.719 fused_ordering(845) 00:13:01.719 fused_ordering(846) 00:13:01.719 fused_ordering(847) 00:13:01.719 fused_ordering(848) 00:13:01.719 fused_ordering(849) 00:13:01.719 fused_ordering(850) 00:13:01.719 
fused_ordering(851) 00:13:01.719 fused_ordering(852) 00:13:01.719 fused_ordering(853) 00:13:01.719 fused_ordering(854) 00:13:01.719 fused_ordering(855) 00:13:01.719 fused_ordering(856) 00:13:01.719 fused_ordering(857) 00:13:01.719 fused_ordering(858) 00:13:01.719 fused_ordering(859) 00:13:01.719 fused_ordering(860) 00:13:01.719 fused_ordering(861) 00:13:01.719 fused_ordering(862) 00:13:01.719 fused_ordering(863) 00:13:01.719 fused_ordering(864) 00:13:01.719 fused_ordering(865) 00:13:01.719 fused_ordering(866) 00:13:01.719 fused_ordering(867) 00:13:01.719 fused_ordering(868) 00:13:01.719 fused_ordering(869) 00:13:01.719 fused_ordering(870) 00:13:01.719 fused_ordering(871) 00:13:01.719 fused_ordering(872) 00:13:01.719 fused_ordering(873) 00:13:01.719 fused_ordering(874) 00:13:01.719 fused_ordering(875) 00:13:01.719 fused_ordering(876) 00:13:01.719 fused_ordering(877) 00:13:01.719 fused_ordering(878) 00:13:01.719 fused_ordering(879) 00:13:01.719 fused_ordering(880) 00:13:01.719 fused_ordering(881) 00:13:01.719 fused_ordering(882) 00:13:01.719 fused_ordering(883) 00:13:01.719 fused_ordering(884) 00:13:01.719 fused_ordering(885) 00:13:01.719 fused_ordering(886) 00:13:01.719 fused_ordering(887) 00:13:01.719 fused_ordering(888) 00:13:01.719 fused_ordering(889) 00:13:01.719 fused_ordering(890) 00:13:01.719 fused_ordering(891) 00:13:01.719 fused_ordering(892) 00:13:01.719 fused_ordering(893) 00:13:01.719 fused_ordering(894) 00:13:01.719 fused_ordering(895) 00:13:01.719 fused_ordering(896) 00:13:01.719 fused_ordering(897) 00:13:01.719 fused_ordering(898) 00:13:01.719 fused_ordering(899) 00:13:01.719 fused_ordering(900) 00:13:01.719 fused_ordering(901) 00:13:01.719 fused_ordering(902) 00:13:01.719 fused_ordering(903) 00:13:01.719 fused_ordering(904) 00:13:01.719 fused_ordering(905) 00:13:01.719 fused_ordering(906) 00:13:01.719 fused_ordering(907) 00:13:01.719 fused_ordering(908) 00:13:01.719 fused_ordering(909) 00:13:01.719 fused_ordering(910) 00:13:01.719 fused_ordering(911) 00:13:01.719 fused_ordering(912) 00:13:01.719 fused_ordering(913) 00:13:01.719 fused_ordering(914) 00:13:01.719 fused_ordering(915) 00:13:01.719 fused_ordering(916) 00:13:01.719 fused_ordering(917) 00:13:01.719 fused_ordering(918) 00:13:01.719 fused_ordering(919) 00:13:01.719 fused_ordering(920) 00:13:01.719 fused_ordering(921) 00:13:01.719 fused_ordering(922) 00:13:01.719 fused_ordering(923) 00:13:01.719 fused_ordering(924) 00:13:01.719 fused_ordering(925) 00:13:01.719 fused_ordering(926) 00:13:01.719 fused_ordering(927) 00:13:01.719 fused_ordering(928) 00:13:01.719 fused_ordering(929) 00:13:01.719 fused_ordering(930) 00:13:01.719 fused_ordering(931) 00:13:01.719 fused_ordering(932) 00:13:01.719 fused_ordering(933) 00:13:01.719 fused_ordering(934) 00:13:01.719 fused_ordering(935) 00:13:01.719 fused_ordering(936) 00:13:01.719 fused_ordering(937) 00:13:01.719 fused_ordering(938) 00:13:01.719 fused_ordering(939) 00:13:01.719 fused_ordering(940) 00:13:01.719 fused_ordering(941) 00:13:01.719 fused_ordering(942) 00:13:01.719 fused_ordering(943) 00:13:01.719 fused_ordering(944) 00:13:01.719 fused_ordering(945) 00:13:01.719 fused_ordering(946) 00:13:01.719 fused_ordering(947) 00:13:01.719 fused_ordering(948) 00:13:01.719 fused_ordering(949) 00:13:01.719 fused_ordering(950) 00:13:01.719 fused_ordering(951) 00:13:01.719 fused_ordering(952) 00:13:01.719 fused_ordering(953) 00:13:01.719 fused_ordering(954) 00:13:01.719 fused_ordering(955) 00:13:01.719 fused_ordering(956) 00:13:01.719 fused_ordering(957) 00:13:01.719 fused_ordering(958) 
00:13:01.719 fused_ordering(959) 00:13:01.719 fused_ordering(960) 00:13:01.719 fused_ordering(961) 00:13:01.719 fused_ordering(962) 00:13:01.719 fused_ordering(963) 00:13:01.719 fused_ordering(964) 00:13:01.719 fused_ordering(965) 00:13:01.719 fused_ordering(966) 00:13:01.719 fused_ordering(967) 00:13:01.719 fused_ordering(968) 00:13:01.719 fused_ordering(969) 00:13:01.719 fused_ordering(970) 00:13:01.719 fused_ordering(971) 00:13:01.719 fused_ordering(972) 00:13:01.719 fused_ordering(973) 00:13:01.719 fused_ordering(974) 00:13:01.719 fused_ordering(975) 00:13:01.719 fused_ordering(976) 00:13:01.719 fused_ordering(977) 00:13:01.719 fused_ordering(978) 00:13:01.719 fused_ordering(979) 00:13:01.719 fused_ordering(980) 00:13:01.719 fused_ordering(981) 00:13:01.719 fused_ordering(982) 00:13:01.719 fused_ordering(983) 00:13:01.719 fused_ordering(984) 00:13:01.719 fused_ordering(985) 00:13:01.719 fused_ordering(986) 00:13:01.719 fused_ordering(987) 00:13:01.719 fused_ordering(988) 00:13:01.719 fused_ordering(989) 00:13:01.719 fused_ordering(990) 00:13:01.719 fused_ordering(991) 00:13:01.719 fused_ordering(992) 00:13:01.719 fused_ordering(993) 00:13:01.719 fused_ordering(994) 00:13:01.719 fused_ordering(995) 00:13:01.719 fused_ordering(996) 00:13:01.719 fused_ordering(997) 00:13:01.719 fused_ordering(998) 00:13:01.719 fused_ordering(999) 00:13:01.719 fused_ordering(1000) 00:13:01.719 fused_ordering(1001) 00:13:01.719 fused_ordering(1002) 00:13:01.719 fused_ordering(1003) 00:13:01.719 fused_ordering(1004) 00:13:01.719 fused_ordering(1005) 00:13:01.719 fused_ordering(1006) 00:13:01.719 fused_ordering(1007) 00:13:01.719 fused_ordering(1008) 00:13:01.719 fused_ordering(1009) 00:13:01.719 fused_ordering(1010) 00:13:01.719 fused_ordering(1011) 00:13:01.719 fused_ordering(1012) 00:13:01.719 fused_ordering(1013) 00:13:01.719 fused_ordering(1014) 00:13:01.719 fused_ordering(1015) 00:13:01.719 fused_ordering(1016) 00:13:01.719 fused_ordering(1017) 00:13:01.719 fused_ordering(1018) 00:13:01.719 fused_ordering(1019) 00:13:01.719 fused_ordering(1020) 00:13:01.719 fused_ordering(1021) 00:13:01.719 fused_ordering(1022) 00:13:01.719 fused_ordering(1023) 00:13:01.719 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:01.719 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:01.719 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:01.719 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:01.719 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:01.719 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:01.719 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:01.719 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:01.719 rmmod nvme_tcp 00:13:01.719 rmmod nvme_fabrics 00:13:01.719 rmmod nvme_keyring 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:01.980 17:49:49 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2951461 ']' 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2951461 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2951461 ']' 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2951461 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2951461 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2951461' 00:13:01.980 killing process with pid 2951461 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2951461 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2951461 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.980 17:49:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:04.522 00:13:04.522 real 0m10.163s 00:13:04.522 user 0m5.190s 00:13:04.522 sys 0m5.130s 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:04.522 ************************************ 00:13:04.522 END TEST nvmf_fused_ordering 00:13:04.522 
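
The killprocess trace above follows a standard kill-and-wait teardown: confirm the pid is still alive with kill -0, identify the process with ps, send it SIGTERM, then wait so the exit status is reaped before the next test starts. A minimal sketch of that pattern, assuming this shape only from the traced commands (it is illustrative, not the literal autotest_common.sh source; the pid is the one this run used):

  # Sketch of the kill-and-wait teardown pattern traced above (illustrative).
  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0   # already gone: nothing to clean
      ps --no-headers -o comm= "$pid"          # identify it, as the trace does
      echo "killing process with pid $pid"
      kill "$pid"                              # default SIGTERM
      wait "$pid" || true                      # reap the exit status
  }
  killprocess 2951461

The wait call only works here because nvmf_tgt was launched as a background job of the same test shell; for an unrelated pid a polling loop on kill -0 would be needed instead.
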
************************************ 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:04.522 ************************************ 00:13:04.522 START TEST nvmf_ns_masking 00:13:04.522 ************************************ 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:04.522 * Looking for test storage... 00:13:04.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:04.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.522 --rc genhtml_branch_coverage=1 00:13:04.522 --rc genhtml_function_coverage=1 00:13:04.522 --rc genhtml_legend=1 00:13:04.522 --rc geninfo_all_blocks=1 00:13:04.522 --rc geninfo_unexecuted_blocks=1 00:13:04.522 00:13:04.522 ' 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:04.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.522 --rc genhtml_branch_coverage=1 00:13:04.522 --rc genhtml_function_coverage=1 00:13:04.522 --rc genhtml_legend=1 00:13:04.522 --rc geninfo_all_blocks=1 00:13:04.522 --rc geninfo_unexecuted_blocks=1 00:13:04.522 00:13:04.522 ' 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:04.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.522 --rc genhtml_branch_coverage=1 00:13:04.522 --rc genhtml_function_coverage=1 00:13:04.522 --rc genhtml_legend=1 00:13:04.522 --rc geninfo_all_blocks=1 00:13:04.522 --rc geninfo_unexecuted_blocks=1 00:13:04.522 00:13:04.522 ' 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:04.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.522 --rc genhtml_branch_coverage=1 00:13:04.522 --rc genhtml_function_coverage=1 00:13:04.522 --rc genhtml_legend=1 00:13:04.522 --rc geninfo_all_blocks=1 00:13:04.522 --rc geninfo_unexecuted_blocks=1 00:13:04.522 00:13:04.522 ' 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.522 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:[... repeated golangci/protoc/go toolchain prefix elided; the same directories are duplicated by re-sourcing paths/export.sh ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same repeated prefix elided ...]:/var/lib/snapd/snap/bin 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same repeated prefix elided ...]:/var/lib/snapd/snap/bin 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:04.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=6a94b268-dbde-4405-8314-a38a5ce6d6c4 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=70432114-47d8-4e5e-a47d-12841614f749 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=d10c9a36-e14b-4a20-9364-d2fc0f6431b5 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:04.523 17:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:09.805 17:49:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:09.805 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:09.805 17:49:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:09.805 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.805 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:09.805 Found net devices under 0000:31:00.0: cvl_0_0 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
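
Both E810 ports are resolved to kernel interfaces by globbing sysfs: every net device bound to a PCI function shows up as a directory under /sys/bus/pci/devices/<bdf>/net/, which is why the two functions above map to cvl_0_0 and cvl_0_1. A reduced sketch of that lookup (the PCI addresses are the ones this rig reported, and the cvl_* names come from this host's ice driver setup):

  # Sketch of the PCI-to-netdev resolution the harness performs per NIC.
  for pci in 0000:31:00.0 0000:31:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      [[ -e ${pci_net_devs[0]} ]] || continue      # glob missed: no netdev bound
      pci_net_devs=("${pci_net_devs[@]##*/}")      # keep interface names only
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done
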
00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:09.806 Found net devices under 0000:31:00.1: cvl_0_1 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.806 17:49:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:09.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:13:09.806 00:13:09.806 --- 10.0.0.2 ping statistics --- 00:13:09.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.806 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:09.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:13:09.806 00:13:09.806 --- 10.0.0.1 ping statistics --- 00:13:09.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.806 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2956416 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2956416 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2956416 ']' 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.806 17:49:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:09.806 17:49:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:09.806 [2024-12-06 17:49:57.450190] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:13:09.806 [2024-12-06 17:49:57.450239] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.807 [2024-12-06 17:49:57.535504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.807 [2024-12-06 17:49:57.570850] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.807 [2024-12-06 17:49:57.570886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.807 [2024-12-06 17:49:57.570894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.807 [2024-12-06 17:49:57.570901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.807 [2024-12-06 17:49:57.570907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
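Condensed, the nvmf_tcp_init sequence traced above isolates the target NIC in its own network namespace and opens the NVMe/TCP port. The commands below restate the logged steps in order, with this run's interface names (a summary of the trace, not an excerpt of nvmf/common.sh):

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address (host side)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP traffic on port 4420; the rule is tagged with an SPDK_NVMF
    # comment so teardown can strip it later via iptables-save | grep -v SPDK_NVMF
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # reachability, host -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and target -> host

nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF), so the target listens on 10.0.0.2 while nvme-cli connects from the default namespace.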
00:13:09.807 [2024-12-06 17:49:57.571498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.744 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.744 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:10.744 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:10.744 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:10.744 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:10.744 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.744 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:10.744 [2024-12-06 17:49:58.425506] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.744 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:10.744 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:10.744 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:11.002 Malloc1 00:13:11.002 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:11.002 Malloc2 00:13:11.261 17:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:11.261 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:11.519 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.519 [2024-12-06 17:49:59.304827] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.519 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:11.519 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d10c9a36-e14b-4a20-9364-d2fc0f6431b5 -a 10.0.0.2 -s 4420 -i 4 00:13:11.778 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.778 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:11.778 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.778 17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:11.778 
17:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:13.766 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:13.766 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:13.766 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:13.766 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:13.766 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.766 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:13.766 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:13.766 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:13.766 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:13.766 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:13.767 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:13.767 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:13.767 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:13.767 [ 0]:0x1 00:13:13.767 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:13.767 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:14.024 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d30be8d31b24b5b9352a81ee5845013 00:13:14.024 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d30be8d31b24b5b9352a81ee5845013 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:14.024 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:14.024 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:14.024 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:14.024 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:14.024 [ 0]:0x1 00:13:14.024 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:14.024 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:14.024 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d30be8d31b24b5b9352a81ee5845013 00:13:14.024 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d30be8d31b24b5b9352a81ee5845013 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:14.024 17:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:14.024 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:14.024 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:14.024 [ 1]:0x2 00:13:14.024 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:14.024 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:14.024 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2e17e6395ed4134926f896c944fb7f5 00:13:14.024 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2e17e6395ed4134926f896c944fb7f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:14.024 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:14.024 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.282 17:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.282 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:14.541 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:14.541 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d10c9a36-e14b-4a20-9364-d2fc0f6431b5 -a 10.0.0.2 -s 4420 -i 4 00:13:14.799 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:14.799 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:14.799 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.799 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:14.799 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:14.799 17:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:16.703 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:16.962 [ 0]:0x2 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=b2e17e6395ed4134926f896c944fb7f5 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2e17e6395ed4134926f896c944fb7f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:16.962 [ 0]:0x1 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d30be8d31b24b5b9352a81ee5845013 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d30be8d31b24b5b9352a81ee5845013 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:16.962 [ 1]:0x2 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:16.962 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:17.221 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2e17e6395ed4134926f896c944fb7f5 00:13:17.221 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2e17e6395ed4134926f896c944fb7f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:17.221 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:17.221 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:17.221 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:17.221 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:17.221 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:17.221 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:17.221 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:17.221 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:17.221 17:50:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:17.221 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:17.221 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:17.221 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:17.221 17:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:17.221 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:17.221 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:17.221 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:17.221 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:17.221 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:17.221 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:17.221 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:17.221 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:17.221 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:17.221 [ 0]:0x2 00:13:17.221 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:17.221 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:17.480 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2e17e6395ed4134926f896c944fb7f5 00:13:17.480 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2e17e6395ed4134926f896c944fb7f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:17.480 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:17.480 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.480 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:17.480 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:17.480 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d10c9a36-e14b-4a20-9364-d2fc0f6431b5 -a 10.0.0.2 -s 4420 -i 4 00:13:17.738 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:17.738 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:17.738 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.738 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:17.738 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:17.738 17:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:19.643 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:19.902 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:19.902 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.902 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:19.902 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.902 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:19.902 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:19.902 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:19.902 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:19.902 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:19.902 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:19.902 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:19.902 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:19.902 [ 0]:0x1 00:13:19.902 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:19.902 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:19.902 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2d30be8d31b24b5b9352a81ee5845013 00:13:19.902 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2d30be8d31b24b5b9352a81ee5845013 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:19.902 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:19.902 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:19.902 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:20.161 [ 1]:0x2 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2e17e6395ed4134926f896c944fb7f5 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2e17e6395ed4134926f896c944fb7f5 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:20.161 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:20.162 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:20.162 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:20.162 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:20.162 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:20.162 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:20.162 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:20.162 [ 0]:0x2 00:13:20.162 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:20.162 17:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2e17e6395ed4134926f896c944fb7f5 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2e17e6395ed4134926f896c944fb7f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:20.421 17:50:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:20.421 [2024-12-06 17:50:08.160851] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:20.421 request: 00:13:20.421 { 00:13:20.421 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:20.421 "nsid": 2, 00:13:20.421 "host": "nqn.2016-06.io.spdk:host1", 00:13:20.421 "method": "nvmf_ns_remove_host", 00:13:20.421 "req_id": 1 00:13:20.421 } 00:13:20.421 Got JSON-RPC error response 00:13:20.421 response: 00:13:20.421 { 00:13:20.421 "code": -32602, 00:13:20.421 "message": "Invalid parameters" 00:13:20.421 } 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:20.421 17:50:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:20.421 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:20.422 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:20.422 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:20.422 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:20.422 [ 0]:0x2 00:13:20.422 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:20.422 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:20.681 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b2e17e6395ed4134926f896c944fb7f5 00:13:20.681 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b2e17e6395ed4134926f896c944fb7f5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:20.681 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:20.681 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:20.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.681 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2958991 00:13:20.681 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.681 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2958991 
/var/tmp/host.sock 00:13:20.681 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2958991 ']' 00:13:20.681 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:20.681 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.681 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:20.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:20.681 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:20.681 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.681 17:50:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:20.681 [2024-12-06 17:50:08.343615] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:13:20.681 [2024-12-06 17:50:08.343665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2958991 ] 00:13:20.681 [2024-12-06 17:50:08.420744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.681 [2024-12-06 17:50:08.456713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.618 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:21.618 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:21.618 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.618 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:21.618 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 6a94b268-dbde-4405-8314-a38a5ce6d6c4 00:13:21.618 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:21.618 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6A94B268DBDE44058314A38A5CE6D6C4 -i 00:13:21.876 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 70432114-47d8-4e5e-a47d-12841614f749 00:13:21.876 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:21.877 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 7043211447D84E5EA47D12841614F749 -i 00:13:22.136 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:22.136 17:50:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:22.394 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:22.395 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:22.654 nvme0n1 00:13:22.654 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:22.654 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:22.913 nvme1n2 00:13:22.913 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:22.913 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:22.913 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:22.913 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:22.913 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:22.913 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:22.913 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:22.913 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:22.913 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:23.172 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 6a94b268-dbde-4405-8314-a38a5ce6d6c4 == \6\a\9\4\b\2\6\8\-\d\b\d\e\-\4\4\0\5\-\8\3\1\4\-\a\3\8\a\5\c\e\6\d\6\c\4 ]] 00:13:23.172 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:23.172 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:23.172 17:50:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:23.432 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
70432114-47d8-4e5e-a47d-12841614f749 == \7\0\4\3\2\1\1\4\-\4\7\d\8\-\4\e\5\e\-\a\4\7\d\-\1\2\8\4\1\6\1\4\f\7\4\9 ]] 00:13:23.432 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.432 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:23.691 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 6a94b268-dbde-4405-8314-a38a5ce6d6c4 00:13:23.691 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:23.691 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6A94B268DBDE44058314A38A5CE6D6C4 00:13:23.691 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:23.691 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6A94B268DBDE44058314A38A5CE6D6C4 00:13:23.691 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.691 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:23.691 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.691 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:23.691 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.692 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:23.692 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.692 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:23.692 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 6A94B268DBDE44058314A38A5CE6D6C4 00:13:23.692 [2024-12-06 17:50:11.497538] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:23.692 [2024-12-06 17:50:11.497567] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:23.692 [2024-12-06 17:50:11.497575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.692 request: 00:13:23.692 { 00:13:23.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.692 "namespace": { 00:13:23.692 "bdev_name": 
"invalid", 00:13:23.692 "nsid": 1, 00:13:23.692 "nguid": "6A94B268DBDE44058314A38A5CE6D6C4", 00:13:23.692 "no_auto_visible": false, 00:13:23.692 "hide_metadata": false 00:13:23.692 }, 00:13:23.692 "method": "nvmf_subsystem_add_ns", 00:13:23.692 "req_id": 1 00:13:23.692 } 00:13:23.692 Got JSON-RPC error response 00:13:23.692 response: 00:13:23.692 { 00:13:23.692 "code": -32602, 00:13:23.692 "message": "Invalid parameters" 00:13:23.692 } 00:13:23.692 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:23.692 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:23.692 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:23.692 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:23.949 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 6a94b268-dbde-4405-8314-a38a5ce6d6c4 00:13:23.949 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:23.949 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6A94B268DBDE44058314A38A5CE6D6C4 -i 00:13:23.949 17:50:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:26.483 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:26.483 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:26.483 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:26.483 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:26.483 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2958991 00:13:26.483 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2958991 ']' 00:13:26.483 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2958991 00:13:26.483 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:26.483 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.483 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2958991 00:13:26.483 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:26.483 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:26.483 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2958991' 00:13:26.483 killing process with pid 2958991 00:13:26.483 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2958991 00:13:26.483 17:50:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2958991 00:13:26.483 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.483 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:26.483 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:26.483 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:26.483 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:26.483 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:26.483 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:26.483 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:26.483 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:26.483 rmmod nvme_tcp 00:13:26.483 rmmod nvme_fabrics 00:13:26.483 rmmod nvme_keyring 00:13:26.483 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2956416 ']' 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2956416 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2956416 ']' 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2956416 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2956416 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2956416' 00:13:26.743 killing process with pid 2956416 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2956416 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2956416 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
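In sum, the namespace-masking flow exercised by this test rests on a small set of RPCs, restated here from the trace with this run's NQNs and the rpc.py path shortened (a summary, not an excerpt of ns_masking.sh):

    # create a namespace that no host can see until explicitly allowed
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # grant / revoke visibility of namespace 1 for a specific host NQN
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # the host observes the change through the nguid reported for the NSID:
    # the real nguid when visible, all zeroes once masked
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid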
00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.743 17:50:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.274 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:29.274 00:13:29.274 real 0m24.713s 00:13:29.274 user 0m28.551s 00:13:29.274 sys 0m6.276s 00:13:29.274 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.274 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:29.274 ************************************ 00:13:29.274 END TEST nvmf_ns_masking 00:13:29.274 ************************************ 00:13:29.274 17:50:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:29.274 17:50:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:29.274 17:50:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:29.274 17:50:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.274 17:50:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:29.274 ************************************ 00:13:29.274 START TEST nvmf_nvme_cli 00:13:29.274 ************************************ 00:13:29.274 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:29.274 * Looking for test storage... 
00:13:29.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:29.274 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:29.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.275 --rc genhtml_branch_coverage=1 00:13:29.275 --rc genhtml_function_coverage=1 00:13:29.275 --rc genhtml_legend=1 00:13:29.275 --rc geninfo_all_blocks=1 00:13:29.275 --rc geninfo_unexecuted_blocks=1 00:13:29.275 00:13:29.275 ' 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:29.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.275 --rc genhtml_branch_coverage=1 00:13:29.275 --rc genhtml_function_coverage=1 00:13:29.275 --rc genhtml_legend=1 00:13:29.275 --rc geninfo_all_blocks=1 00:13:29.275 --rc geninfo_unexecuted_blocks=1 00:13:29.275 00:13:29.275 ' 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:29.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.275 --rc genhtml_branch_coverage=1 00:13:29.275 --rc genhtml_function_coverage=1 00:13:29.275 --rc genhtml_legend=1 00:13:29.275 --rc geninfo_all_blocks=1 00:13:29.275 --rc geninfo_unexecuted_blocks=1 00:13:29.275 00:13:29.275 ' 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:29.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:29.275 --rc genhtml_branch_coverage=1 00:13:29.275 --rc genhtml_function_coverage=1 00:13:29.275 --rc genhtml_legend=1 00:13:29.275 --rc geninfo_all_blocks=1 00:13:29.275 --rc geninfo_unexecuted_blocks=1 00:13:29.275 00:13:29.275 ' 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
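
Note: the "lt 1.15 2" exchange above is the harness checking whether the installed lcov predates version 2: scripts/common.sh splits each version string on ".", "-", and ":" and compares the fields numerically. Below is a standalone sketch that mirrors the comparison logic visible in the xtrace; the function name version_lt is illustrative, not the actual helper in spdk/scripts/common.sh, and like the original it assumes purely numeric fields.

    # succeed (status 0) when $1 is strictly older than $2, comparing field by field
    version_lt() {
        local -a ver1 ver2
        local i max
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # strictly older
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1   # newer
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the 'return 0' seen above
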
00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:29.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:29.275 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:29.275 17:50:16 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:29.276 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:29.276 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:29.276 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.276 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:29.276 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:29.276 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:29.276 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.276 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.276 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.276 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:29.276 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:29.276 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:29.276 17:50:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:34.545 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:34.545 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.545 
17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:34.545 Found net devices under 0000:31:00.0: cvl_0_0 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:34.545 Found net devices under 0000:31:00.1: cvl_0_1 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:34.545 17:50:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:34.545 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:34.545 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:34.545 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:34.545 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:34.545 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:34.545 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:34.545 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:34.545 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:34.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:34.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:13:34.545 00:13:34.545 --- 10.0.0.2 ping statistics --- 00:13:34.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.546 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:34.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:34.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:13:34.546 00:13:34.546 --- 10.0.0.1 ping statistics --- 00:13:34.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.546 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2964720 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2964720 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2964720 ']' 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.546 17:50:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:34.546 [2024-12-06 17:50:22.266397] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
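
Note: the nvmf_tcp_init block above splits this single host into a target side and an initiator side using a network namespace. Condensed from the commands in the log (interface names cvl_0_0/cvl_0_1 and the namespace name are exactly as logged; the iptables comment string is shortened here):

    ip netns add cvl_0_0_ns_spdk                        # target gets its own net namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the SPDK_NVMF comment is what the iptables-save | grep -v SPDK_NVMF | iptables-restore
    # teardown seen at the end of each test keys on
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                  # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace

The two one-packet pings (0.607 ms and 0.292 ms above) confirm the 10.0.0.0/24 path before nvmf_tgt is launched inside the namespace.
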
00:13:34.546 [2024-12-06 17:50:22.266464] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.546 [2024-12-06 17:50:22.361483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:34.805 [2024-12-06 17:50:22.416678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.805 [2024-12-06 17:50:22.416735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.805 [2024-12-06 17:50:22.416744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.805 [2024-12-06 17:50:22.416751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.805 [2024-12-06 17:50:22.416757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:34.805 [2024-12-06 17:50:22.418900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.805 [2024-12-06 17:50:22.419058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.805 [2024-12-06 17:50:22.419217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:34.805 [2024-12-06 17:50:22.419219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.372 [2024-12-06 17:50:23.100934] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.372 Malloc0 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
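
Note: the target was started as "ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF" (four reactor cores, all tracepoint groups, as the notices above show), and the nvme_cli test then configures it over JSON-RPC. The configuration calls recorded around this point, gathered into one sequence for readability (flags exactly as logged; the long rpc.py path is shortened to rpc.py):

    rpc.py nvmf_create_transport -t tcp -o -u 8192       # TCP transport, -u 8192 = 8 KiB IO unit
    rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM disk, 512 B blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

rpc.py reaches the target through the /var/tmp/spdk.sock UNIX-domain socket, which lives in the shared filesystem, so the RPCs work from the root namespace even though the target's network stack does not; the subsequent nvme discover / nvme connect calls then go over 10.0.0.2:4420.
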
00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.372 Malloc1 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.372 [2024-12-06 17:50:23.181580] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.372 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:13:35.631 00:13:35.631 Discovery Log Number of Records 2, Generation counter 2 00:13:35.631 =====Discovery Log Entry 0====== 00:13:35.631 trtype: tcp 00:13:35.631 adrfam: ipv4 00:13:35.631 subtype: current discovery subsystem 00:13:35.631 treq: not required 00:13:35.631 portid: 0 00:13:35.631 trsvcid: 4420 00:13:35.631 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:13:35.631 traddr: 10.0.0.2 00:13:35.631 eflags: explicit discovery connections, duplicate discovery information 00:13:35.631 sectype: none 00:13:35.631 =====Discovery Log Entry 1====== 00:13:35.631 trtype: tcp 00:13:35.631 adrfam: ipv4 00:13:35.631 subtype: nvme subsystem 00:13:35.631 treq: not required 00:13:35.631 portid: 0 00:13:35.631 trsvcid: 4420 00:13:35.631 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:35.631 traddr: 10.0.0.2 00:13:35.631 eflags: none 00:13:35.631 sectype: none 00:13:35.631 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:35.631 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:35.631 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:35.631 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:35.631 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:35.631 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:35.631 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:35.631 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:35.631 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:35.631 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:35.631 17:50:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:37.535 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:37.535 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:37.535 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:37.535 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:37.535 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:37.535 17:50:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:39.437 17:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:39.437 /dev/nvme0n2 ]] 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.437 17:50:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:39.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.437 17:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:39.437 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:39.438 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:39.438 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:39.438 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:39.438 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:39.438 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:39.438 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:39.438 rmmod nvme_tcp 00:13:39.696 rmmod nvme_fabrics 00:13:39.696 rmmod nvme_keyring 00:13:39.696 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:39.696 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:39.696 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:39.696 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2964720 ']' 00:13:39.696 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2964720 00:13:39.696 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2964720 ']' 00:13:39.696 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2964720 00:13:39.696 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:39.696 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.696 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2964720 00:13:39.696 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.696 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.696 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2964720' 00:13:39.696 killing process with pid 2964720 00:13:39.696 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2964720 00:13:39.696 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2964720 00:13:39.696 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:39.697 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:39.697 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:39.697 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:39.697 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:39.697 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:39.697 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:39.697 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:39.697 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:39.697 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.697 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.697 17:50:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:42.239 00:13:42.239 real 0m12.964s 00:13:42.239 user 0m21.743s 00:13:42.239 sys 0m4.744s 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:42.239 ************************************ 00:13:42.239 END TEST nvmf_nvme_cli 00:13:42.239 ************************************ 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:42.239 ************************************ 00:13:42.239 START TEST nvmf_vfio_user 00:13:42.239 ************************************ 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:13:42.239 * Looking for test storage... 00:13:42.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:42.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.239 --rc genhtml_branch_coverage=1 00:13:42.239 --rc genhtml_function_coverage=1 00:13:42.239 --rc genhtml_legend=1 00:13:42.239 --rc geninfo_all_blocks=1 00:13:42.239 --rc geninfo_unexecuted_blocks=1 00:13:42.239 00:13:42.239 ' 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:42.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.239 --rc genhtml_branch_coverage=1 00:13:42.239 --rc genhtml_function_coverage=1 00:13:42.239 --rc genhtml_legend=1 00:13:42.239 --rc geninfo_all_blocks=1 00:13:42.239 --rc geninfo_unexecuted_blocks=1 00:13:42.239 00:13:42.239 ' 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:42.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.239 --rc genhtml_branch_coverage=1 00:13:42.239 --rc genhtml_function_coverage=1 00:13:42.239 --rc genhtml_legend=1 00:13:42.239 --rc geninfo_all_blocks=1 00:13:42.239 --rc geninfo_unexecuted_blocks=1 00:13:42.239 00:13:42.239 ' 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:42.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.239 --rc genhtml_branch_coverage=1 00:13:42.239 --rc genhtml_function_coverage=1 00:13:42.239 --rc genhtml_legend=1 00:13:42.239 --rc geninfo_all_blocks=1 00:13:42.239 --rc geninfo_unexecuted_blocks=1 00:13:42.239 00:13:42.239 ' 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.239 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:42.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
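The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33, where the traced test is '[' '' -eq 1 ']': a variable expanded to the empty string, and the numeric -eq operator needs integers on both sides. A minimal reproduction plus a defensive rewrite is sketched below; MAYBE_FLAG is a placeholder name, not the variable common.sh actually tests.

# Reproduce: an empty expansion is not an integer, so test exits 2 with the error above.
unset MAYBE_FLAG
[ "$MAYBE_FLAG" -eq 1 ] && echo enabled

# Defensive form: default the expansion so the operand is always numeric.
if [ "${MAYBE_FLAG:-0}" -eq 1 ]; then
    echo enabled
fi
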
00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2966528 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2966528' 00:13:42.240 Process pid: 2966528 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2966528 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2966528 ']' 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:42.240 [2024-12-06 17:50:29.775911] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:13:42.240 [2024-12-06 17:50:29.775964] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.240 [2024-12-06 17:50:29.842153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:42.240 [2024-12-06 17:50:29.872232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.240 [2024-12-06 17:50:29.872263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
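Above, the harness launches the target app (nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]') in the background and parks in waitforlisten until PID 2966528 is serving RPCs on /var/tmp/spdk.sock. A rough stand-in for that launch-and-wait pattern, with the path made repo-relative, the trap simplified, and readiness polled via the rpc_get_methods RPC rather than the real waitforlisten helper:

# Start the target on cores 0-3 with all tracepoint groups (0xFFFF) enabled.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!
trap 'kill "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT

# Block until the app answers on the default RPC socket.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done
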
00:13:42.240 [2024-12-06 17:50:29.872269] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.240 [2024-12-06 17:50:29.872275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.240 [2024-12-06 17:50:29.872280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.240 [2024-12-06 17:50:29.873783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.240 [2024-12-06 17:50:29.873924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.240 [2024-12-06 17:50:29.874072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.240 [2024-12-06 17:50:29.874074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:42.240 17:50:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:43.180 17:50:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:43.439 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:43.439 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:43.439 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:43.439 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:43.439 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:43.698 Malloc1 00:13:43.698 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:43.698 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:43.956 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:43.956 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:43.957 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:43.957 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:44.215 Malloc2 00:13:44.215 17:50:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
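The RPC traffic around this point is the setup loop at work: for each of the NUM_DEVICES=2 controllers it creates a socket directory, a malloc bdev, a subsystem, a namespace, and a vfio-user listener. Condensed from the trace, with the rpc.py path shortened to be repo-relative:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER
for i in $(seq 1 2); do
    mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
    $rpc bdev_malloc_create 64 512 -b "Malloc$i"    # 64 MB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    $rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
done
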
00:13:44.474 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:44.474 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:44.737 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:44.737 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:44.737 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:44.737 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:44.737 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:44.737 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:44.737 [2024-12-06 17:50:32.431369] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:13:44.737 [2024-12-06 17:50:32.431398] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2967032 ] 00:13:44.737 [2024-12-06 17:50:32.466801] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:44.737 [2024-12-06 17:50:32.476090] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:44.737 [2024-12-06 17:50:32.476111] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f014cc2b000 00:13:44.737 [2024-12-06 17:50:32.477088] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.737 [2024-12-06 17:50:32.478084] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.737 [2024-12-06 17:50:32.479102] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.737 [2024-12-06 17:50:32.480099] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:44.737 [2024-12-06 17:50:32.481112] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:44.737 [2024-12-06 17:50:32.482114] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.737 [2024-12-06 17:50:32.483115] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:13:44.737 [2024-12-06 17:50:32.484127] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:44.737 [2024-12-06 17:50:32.485137] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:44.737 [2024-12-06 17:50:32.485144] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f014cc20000 00:13:44.737 [2024-12-06 17:50:32.486053] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:44.737 [2024-12-06 17:50:32.495508] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:44.737 [2024-12-06 17:50:32.495534] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:44.737 [2024-12-06 17:50:32.500223] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:44.737 [2024-12-06 17:50:32.500256] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:44.737 [2024-12-06 17:50:32.500317] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:44.737 [2024-12-06 17:50:32.500330] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:44.738 [2024-12-06 17:50:32.500334] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:44.738 [2024-12-06 17:50:32.501223] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:44.738 [2024-12-06 17:50:32.501233] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:44.738 [2024-12-06 17:50:32.501239] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:44.738 [2024-12-06 17:50:32.502230] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:44.738 [2024-12-06 17:50:32.502237] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:44.738 [2024-12-06 17:50:32.502243] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:44.738 [2024-12-06 17:50:32.503235] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:44.738 [2024-12-06 17:50:32.503242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:44.738 [2024-12-06 17:50:32.504242] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
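The Bar mapping, sparse-mmap, and controller state-machine DEBUG lines in this stretch appear to be component debug logs enabled by the -L flags on the identify invocation above (these flags take effect in debug builds); without them the tool would emit only the controller report further down. The invocation as traced, path shortened:

./build/bin/spdk_nvme_identify \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -g -L nvme -L nvme_vfio -L vfio_pci
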
00:13:44.738 [2024-12-06 17:50:32.504249] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:44.738 [2024-12-06 17:50:32.504252] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:44.738 [2024-12-06 17:50:32.504257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:44.738 [2024-12-06 17:50:32.504364] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:44.738 [2024-12-06 17:50:32.504367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:44.738 [2024-12-06 17:50:32.504373] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:44.738 [2024-12-06 17:50:32.505246] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:44.738 [2024-12-06 17:50:32.506249] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:44.738 [2024-12-06 17:50:32.507265] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:44.738 [2024-12-06 17:50:32.508260] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:44.738 [2024-12-06 17:50:32.508321] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:44.738 [2024-12-06 17:50:32.509266] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:44.738 [2024-12-06 17:50:32.509272] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:44.738 [2024-12-06 17:50:32.509276] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509291] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:44.738 [2024-12-06 17:50:32.509296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509312] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:44.738 [2024-12-06 17:50:32.509316] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:44.738 [2024-12-06 17:50:32.509319] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.738 [2024-12-06 17:50:32.509330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:13:44.738 [2024-12-06 17:50:32.509369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:44.738 [2024-12-06 17:50:32.509377] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:44.738 [2024-12-06 17:50:32.509380] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:44.738 [2024-12-06 17:50:32.509383] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:44.738 [2024-12-06 17:50:32.509387] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:44.738 [2024-12-06 17:50:32.509391] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:44.738 [2024-12-06 17:50:32.509395] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:44.738 [2024-12-06 17:50:32.509398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509411] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:44.738 [2024-12-06 17:50:32.509421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:44.738 [2024-12-06 17:50:32.509430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.738 [2024-12-06 17:50:32.509436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.738 [2024-12-06 17:50:32.509442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.738 [2024-12-06 17:50:32.509448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:44.738 [2024-12-06 17:50:32.509451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:44.738 [2024-12-06 17:50:32.509476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:44.738 [2024-12-06 17:50:32.509480] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:44.738 
[2024-12-06 17:50:32.509484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509501] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:44.738 [2024-12-06 17:50:32.509508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:44.738 [2024-12-06 17:50:32.509552] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509564] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:44.738 [2024-12-06 17:50:32.509567] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:44.738 [2024-12-06 17:50:32.509569] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.738 [2024-12-06 17:50:32.509574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:44.738 [2024-12-06 17:50:32.509589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:44.738 [2024-12-06 17:50:32.509598] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:44.738 [2024-12-06 17:50:32.509604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509610] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509617] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:44.738 [2024-12-06 17:50:32.509620] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:44.738 [2024-12-06 17:50:32.509622] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.738 [2024-12-06 17:50:32.509627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:44.738 [2024-12-06 17:50:32.509650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:44.738 [2024-12-06 17:50:32.509659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509669] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:44.738 [2024-12-06 17:50:32.509672] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:44.738 [2024-12-06 17:50:32.509675] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.738 [2024-12-06 17:50:32.509679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:44.738 [2024-12-06 17:50:32.509692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:44.738 [2024-12-06 17:50:32.509699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509717] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509725] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:44.738 [2024-12-06 17:50:32.509728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:44.738 [2024-12-06 17:50:32.509732] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:44.738 [2024-12-06 17:50:32.509747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:44.738 [2024-12-06 17:50:32.509755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:44.738 [2024-12-06 17:50:32.509764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:44.738 [2024-12-06 17:50:32.509772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:44.738 [2024-12-06 17:50:32.509780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:44.738 [2024-12-06 17:50:32.509788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:44.738 [2024-12-06 17:50:32.509796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:44.738 [2024-12-06 17:50:32.509803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:44.738 [2024-12-06 17:50:32.509814] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:44.738 [2024-12-06 17:50:32.509817] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:44.738 [2024-12-06 17:50:32.509820] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:44.738 [2024-12-06 17:50:32.509823] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:44.738 [2024-12-06 17:50:32.509825] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:44.738 [2024-12-06 17:50:32.509830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:44.738 [2024-12-06 17:50:32.509835] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:44.738 [2024-12-06 17:50:32.509838] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:44.738 [2024-12-06 17:50:32.509841] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.738 [2024-12-06 17:50:32.509845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:44.738 [2024-12-06 17:50:32.509850] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:44.738 [2024-12-06 17:50:32.509853] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:44.738 [2024-12-06 17:50:32.509855] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.738 [2024-12-06 17:50:32.509860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:44.738 [2024-12-06 17:50:32.509866] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:44.738 [2024-12-06 17:50:32.509869] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:44.738 [2024-12-06 17:50:32.509871] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:44.738 [2024-12-06 17:50:32.509875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:44.738 [2024-12-06 17:50:32.509880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:44.739 [2024-12-06 17:50:32.509889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:13:44.739 [2024-12-06 17:50:32.509897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:44.739 [2024-12-06 17:50:32.509902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:44.739 ===================================================== 00:13:44.739 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:44.739 ===================================================== 00:13:44.739 Controller Capabilities/Features 00:13:44.739 ================================ 00:13:44.739 Vendor ID: 4e58 00:13:44.739 Subsystem Vendor ID: 4e58 00:13:44.739 Serial Number: SPDK1 00:13:44.739 Model Number: SPDK bdev Controller 00:13:44.739 Firmware Version: 25.01 00:13:44.739 Recommended Arb Burst: 6 00:13:44.739 IEEE OUI Identifier: 8d 6b 50 00:13:44.739 Multi-path I/O 00:13:44.739 May have multiple subsystem ports: Yes 00:13:44.739 May have multiple controllers: Yes 00:13:44.739 Associated with SR-IOV VF: No 00:13:44.739 Max Data Transfer Size: 131072 00:13:44.739 Max Number of Namespaces: 32 00:13:44.739 Max Number of I/O Queues: 127 00:13:44.739 NVMe Specification Version (VS): 1.3 00:13:44.739 NVMe Specification Version (Identify): 1.3 00:13:44.739 Maximum Queue Entries: 256 00:13:44.739 Contiguous Queues Required: Yes 00:13:44.739 Arbitration Mechanisms Supported 00:13:44.739 Weighted Round Robin: Not Supported 00:13:44.739 Vendor Specific: Not Supported 00:13:44.739 Reset Timeout: 15000 ms 00:13:44.739 Doorbell Stride: 4 bytes 00:13:44.739 NVM Subsystem Reset: Not Supported 00:13:44.739 Command Sets Supported 00:13:44.739 NVM Command Set: Supported 00:13:44.739 Boot Partition: Not Supported 00:13:44.739 Memory Page Size Minimum: 4096 bytes 00:13:44.739 Memory Page Size Maximum: 4096 bytes 00:13:44.739 Persistent Memory Region: Not Supported 00:13:44.739 Optional Asynchronous Events Supported 00:13:44.739 Namespace Attribute Notices: Supported 00:13:44.739 Firmware Activation Notices: Not Supported 00:13:44.739 ANA Change Notices: Not Supported 00:13:44.739 PLE Aggregate Log Change Notices: Not Supported 00:13:44.739 LBA Status Info Alert Notices: Not Supported 00:13:44.739 EGE Aggregate Log Change Notices: Not Supported 00:13:44.739 Normal NVM Subsystem Shutdown event: Not Supported 00:13:44.739 Zone Descriptor Change Notices: Not Supported 00:13:44.739 Discovery Log Change Notices: Not Supported 00:13:44.739 Controller Attributes 00:13:44.739 128-bit Host Identifier: Supported 00:13:44.739 Non-Operational Permissive Mode: Not Supported 00:13:44.739 NVM Sets: Not Supported 00:13:44.739 Read Recovery Levels: Not Supported 00:13:44.739 Endurance Groups: Not Supported 00:13:44.739 Predictable Latency Mode: Not Supported 00:13:44.739 Traffic Based Keep ALive: Not Supported 00:13:44.739 Namespace Granularity: Not Supported 00:13:44.739 SQ Associations: Not Supported 00:13:44.739 UUID List: Not Supported 00:13:44.739 Multi-Domain Subsystem: Not Supported 00:13:44.739 Fixed Capacity Management: Not Supported 00:13:44.739 Variable Capacity Management: Not Supported 00:13:44.739 Delete Endurance Group: Not Supported 00:13:44.739 Delete NVM Set: Not Supported 00:13:44.739 Extended LBA Formats Supported: Not Supported 00:13:44.739 Flexible Data Placement Supported: Not Supported 00:13:44.739 00:13:44.739 Controller Memory Buffer Support 00:13:44.739 ================================ 00:13:44.739 
Supported: No 00:13:44.739 00:13:44.739 Persistent Memory Region Support 00:13:44.739 ================================ 00:13:44.739 Supported: No 00:13:44.739 00:13:44.739 Admin Command Set Attributes 00:13:44.739 ============================ 00:13:44.739 Security Send/Receive: Not Supported 00:13:44.739 Format NVM: Not Supported 00:13:44.739 Firmware Activate/Download: Not Supported 00:13:44.739 Namespace Management: Not Supported 00:13:44.739 Device Self-Test: Not Supported 00:13:44.739 Directives: Not Supported 00:13:44.739 NVMe-MI: Not Supported 00:13:44.739 Virtualization Management: Not Supported 00:13:44.739 Doorbell Buffer Config: Not Supported 00:13:44.739 Get LBA Status Capability: Not Supported 00:13:44.739 Command & Feature Lockdown Capability: Not Supported 00:13:44.739 Abort Command Limit: 4 00:13:44.739 Async Event Request Limit: 4 00:13:44.739 Number of Firmware Slots: N/A 00:13:44.739 Firmware Slot 1 Read-Only: N/A 00:13:44.739 Firmware Activation Without Reset: N/A 00:13:44.739 Multiple Update Detection Support: N/A 00:13:44.739 Firmware Update Granularity: No Information Provided 00:13:44.739 Per-Namespace SMART Log: No 00:13:44.739 Asymmetric Namespace Access Log Page: Not Supported 00:13:44.739 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:44.739 Command Effects Log Page: Supported 00:13:44.739 Get Log Page Extended Data: Supported 00:13:44.739 Telemetry Log Pages: Not Supported 00:13:44.739 Persistent Event Log Pages: Not Supported 00:13:44.739 Supported Log Pages Log Page: May Support 00:13:44.739 Commands Supported & Effects Log Page: Not Supported 00:13:44.739 Feature Identifiers & Effects Log Page:May Support 00:13:44.739 NVMe-MI Commands & Effects Log Page: May Support 00:13:44.739 Data Area 4 for Telemetry Log: Not Supported 00:13:44.739 Error Log Page Entries Supported: 128 00:13:44.739 Keep Alive: Supported 00:13:44.739 Keep Alive Granularity: 10000 ms 00:13:44.739 00:13:44.739 NVM Command Set Attributes 00:13:44.739 ========================== 00:13:44.739 Submission Queue Entry Size 00:13:44.739 Max: 64 00:13:44.739 Min: 64 00:13:44.739 Completion Queue Entry Size 00:13:44.739 Max: 16 00:13:44.739 Min: 16 00:13:44.739 Number of Namespaces: 32 00:13:44.739 Compare Command: Supported 00:13:44.739 Write Uncorrectable Command: Not Supported 00:13:44.739 Dataset Management Command: Supported 00:13:44.739 Write Zeroes Command: Supported 00:13:44.739 Set Features Save Field: Not Supported 00:13:44.739 Reservations: Not Supported 00:13:44.739 Timestamp: Not Supported 00:13:44.739 Copy: Supported 00:13:44.739 Volatile Write Cache: Present 00:13:44.739 Atomic Write Unit (Normal): 1 00:13:44.739 Atomic Write Unit (PFail): 1 00:13:44.739 Atomic Compare & Write Unit: 1 00:13:44.739 Fused Compare & Write: Supported 00:13:44.739 Scatter-Gather List 00:13:44.739 SGL Command Set: Supported (Dword aligned) 00:13:44.739 SGL Keyed: Not Supported 00:13:44.739 SGL Bit Bucket Descriptor: Not Supported 00:13:44.739 SGL Metadata Pointer: Not Supported 00:13:44.739 Oversized SGL: Not Supported 00:13:44.739 SGL Metadata Address: Not Supported 00:13:44.739 SGL Offset: Not Supported 00:13:44.739 Transport SGL Data Block: Not Supported 00:13:44.739 Replay Protected Memory Block: Not Supported 00:13:44.739 00:13:44.739 Firmware Slot Information 00:13:44.739 ========================= 00:13:44.739 Active slot: 1 00:13:44.739 Slot 1 Firmware Revision: 25.01 00:13:44.739 00:13:44.739 00:13:44.739 Commands Supported and Effects 00:13:44.739 ============================== 00:13:44.739 Admin 
Commands 00:13:44.739 -------------- 00:13:44.739 Get Log Page (02h): Supported 00:13:44.739 Identify (06h): Supported 00:13:44.739 Abort (08h): Supported 00:13:44.739 Set Features (09h): Supported 00:13:44.739 Get Features (0Ah): Supported 00:13:44.739 Asynchronous Event Request (0Ch): Supported 00:13:44.739 Keep Alive (18h): Supported 00:13:44.739 I/O Commands 00:13:44.739 ------------ 00:13:44.739 Flush (00h): Supported LBA-Change 00:13:44.739 Write (01h): Supported LBA-Change 00:13:44.739 Read (02h): Supported 00:13:44.739 Compare (05h): Supported 00:13:44.739 Write Zeroes (08h): Supported LBA-Change 00:13:44.739 Dataset Management (09h): Supported LBA-Change 00:13:44.739 Copy (19h): Supported LBA-Change 00:13:44.739 00:13:44.739 Error Log 00:13:44.739 ========= 00:13:44.739 00:13:44.739 Arbitration 00:13:44.739 =========== 00:13:44.739 Arbitration Burst: 1 00:13:44.739 00:13:44.739 Power Management 00:13:44.739 ================ 00:13:44.739 Number of Power States: 1 00:13:44.739 Current Power State: Power State #0 00:13:44.739 Power State #0: 00:13:44.739 Max Power: 0.00 W 00:13:44.739 Non-Operational State: Operational 00:13:44.739 Entry Latency: Not Reported 00:13:44.739 Exit Latency: Not Reported 00:13:44.739 Relative Read Throughput: 0 00:13:44.739 Relative Read Latency: 0 00:13:44.739 Relative Write Throughput: 0 00:13:44.739 Relative Write Latency: 0 00:13:44.739 Idle Power: Not Reported 00:13:44.739 Active Power: Not Reported 00:13:44.739 Non-Operational Permissive Mode: Not Supported 00:13:44.739 00:13:44.739 Health Information 00:13:44.739 ================== 00:13:44.739 Critical Warnings: 00:13:44.739 Available Spare Space: OK 00:13:44.739 Temperature: OK 00:13:44.739 Device Reliability: OK 00:13:44.739 Read Only: No 00:13:44.739 Volatile Memory Backup: OK 00:13:44.739 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:44.739 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:44.739 Available Spare: 0% 00:13:44.739 Available Sp[2024-12-06 17:50:32.509975] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:44.739 [2024-12-06 17:50:32.509985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:44.739 [2024-12-06 17:50:32.510008] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:44.739 [2024-12-06 17:50:32.510015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.739 [2024-12-06 17:50:32.510022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.739 [2024-12-06 17:50:32.510027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.739 [2024-12-06 17:50:32.510031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:44.739 [2024-12-06 17:50:32.510274] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:44.739 [2024-12-06 17:50:32.510282] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:44.739 [2024-12-06 17:50:32.511280] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:44.739 [2024-12-06 17:50:32.511321] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:44.739 [2024-12-06 17:50:32.511326] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:44.739 [2024-12-06 17:50:32.512285] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:44.739 [2024-12-06 17:50:32.512294] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:44.739 [2024-12-06 17:50:32.512343] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:44.739 [2024-12-06 17:50:32.513313] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:44.739 are Threshold: 0% 00:13:44.739 Life Percentage Used: 0% 00:13:44.739 Data Units Read: 0 00:13:44.739 Data Units Written: 0 00:13:44.739 Host Read Commands: 0 00:13:44.739 Host Write Commands: 0 00:13:44.739 Controller Busy Time: 0 minutes 00:13:44.739 Power Cycles: 0 00:13:44.739 Power On Hours: 0 hours 00:13:44.739 Unsafe Shutdowns: 0 00:13:44.739 Unrecoverable Media Errors: 0 00:13:44.739 Lifetime Error Log Entries: 0 00:13:44.739 Warning Temperature Time: 0 minutes 00:13:44.739 Critical Temperature Time: 0 minutes 00:13:44.739 00:13:44.739 Number of Queues 00:13:44.739 ================ 00:13:44.739 Number of I/O Submission Queues: 127 00:13:44.739 Number of I/O Completion Queues: 127 00:13:44.740 00:13:44.740 Active Namespaces 00:13:44.740 ================= 00:13:44.740 Namespace ID:1 00:13:44.740 Error Recovery Timeout: Unlimited 00:13:44.740 Command Set Identifier: NVM (00h) 00:13:44.740 Deallocate: Supported 00:13:44.740 Deallocated/Unwritten Error: Not Supported 00:13:44.740 Deallocated Read Value: Unknown 00:13:44.740 Deallocate in Write Zeroes: Not Supported 00:13:44.740 Deallocated Guard Field: 0xFFFF 00:13:44.740 Flush: Supported 00:13:44.740 Reservation: Supported 00:13:44.740 Namespace Sharing Capabilities: Multiple Controllers 00:13:44.740 Size (in LBAs): 131072 (0GiB) 00:13:44.740 Capacity (in LBAs): 131072 (0GiB) 00:13:44.740 Utilization (in LBAs): 131072 (0GiB) 00:13:44.740 NGUID: A3B2132E83DB4969A2DFFB1D75E6B1A7 00:13:44.740 UUID: a3b2132e-83db-4969-a2df-fb1d75e6b1a7 00:13:44.740 Thin Provisioning: Not Supported 00:13:44.740 Per-NS Atomic Units: Yes 00:13:44.740 Atomic Boundary Size (Normal): 0 00:13:44.740 Atomic Boundary Size (PFail): 0 00:13:44.740 Atomic Boundary Offset: 0 00:13:44.740 Maximum Single Source Range Length: 65535 00:13:44.740 Maximum Copy Length: 65535 00:13:44.740 Maximum Source Range Count: 1 00:13:44.740 NGUID/EUI64 Never Reused: No 00:13:44.740 Namespace Write Protected: No 00:13:44.740 Number of LBA Formats: 1 00:13:44.740 Current LBA Format: LBA Format #00 00:13:44.740 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:44.740 00:13:44.740 17:50:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
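The spdk_nvme_perf command just above starts the first datapath pass: queue depth 128, 4 KiB sequential reads for 5 seconds with the worker pinned by core mask 0x2; -s 256 and -g, if I read the flags right, size and shape the memory pool. A repo-relative form of the same invocation (the write pass at @85 changes only -w write; the reconnect example at @86 is a different binary that drops to -q 32 -w randrw -M 50 -c 0xE):

perf=./build/bin/spdk_nvme_perf
conn='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

# qd 128, 4 KiB reads, 5 s, core mask 0x2, 256 MB memory pool
$perf -r "$conn" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
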
00:13:44.997 [2024-12-06 17:50:32.685715] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:50.267 Initializing NVMe Controllers 00:13:50.267 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:50.267 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:50.267 Initialization complete. Launching workers. 00:13:50.267 ======================================================== 00:13:50.267 Latency(us) 00:13:50.267 Device Information : IOPS MiB/s Average min max 00:13:50.267 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39978.30 156.17 3201.62 866.03 7734.28 00:13:50.267 ======================================================== 00:13:50.267 Total : 39978.30 156.17 3201.62 866.03 7734.28 00:13:50.267 00:13:50.267 [2024-12-06 17:50:37.705925] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:50.267 17:50:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:50.267 [2024-12-06 17:50:37.877747] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:55.534 Initializing NVMe Controllers 00:13:55.534 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:55.534 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:55.534 Initialization complete. Launching workers. 
00:13:55.534 ======================================================== 00:13:55.534 Latency(us) 00:13:55.534 Device Information : IOPS MiB/s Average min max 00:13:55.534 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16056.32 62.72 7977.50 4985.74 10975.77 00:13:55.534 ======================================================== 00:13:55.534 Total : 16056.32 62.72 7977.50 4985.74 10975.77 00:13:55.534 00:13:55.534 [2024-12-06 17:50:42.916730] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:55.534 17:50:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:55.534 [2024-12-06 17:50:43.120622] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:00.825 [2024-12-06 17:50:48.231464] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:00.825 Initializing NVMe Controllers 00:14:00.825 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:00.825 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:00.825 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:00.825 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:00.825 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:00.825 Initialization complete. Launching workers. 00:14:00.825 Starting thread on core 2 00:14:00.825 Starting thread on core 3 00:14:00.825 Starting thread on core 1 00:14:00.825 17:50:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:00.825 [2024-12-06 17:50:48.468342] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:04.114 [2024-12-06 17:50:51.529318] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:04.114 Initializing NVMe Controllers 00:14:04.114 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:04.114 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:04.114 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:04.114 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:04.114 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:04.114 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:04.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:04.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:04.114 Initialization complete. Launching workers. 
00:14:04.114 Starting thread on core 1 with urgent priority queue 00:14:04.114 Starting thread on core 2 with urgent priority queue 00:14:04.114 Starting thread on core 3 with urgent priority queue 00:14:04.114 Starting thread on core 0 with urgent priority queue 00:14:04.114 SPDK bdev Controller (SPDK1 ) core 0: 8989.00 IO/s 11.12 secs/100000 ios 00:14:04.114 SPDK bdev Controller (SPDK1 ) core 1: 13745.00 IO/s 7.28 secs/100000 ios 00:14:04.114 SPDK bdev Controller (SPDK1 ) core 2: 9147.00 IO/s 10.93 secs/100000 ios 00:14:04.114 SPDK bdev Controller (SPDK1 ) core 3: 12331.00 IO/s 8.11 secs/100000 ios 00:14:04.114 ======================================================== 00:14:04.114 00:14:04.114 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:04.114 [2024-12-06 17:50:51.760245] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:04.114 Initializing NVMe Controllers 00:14:04.114 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:04.114 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:04.114 Namespace ID: 1 size: 0GB 00:14:04.114 Initialization complete. 00:14:04.114 INFO: using host memory buffer for IO 00:14:04.114 Hello world! 00:14:04.114 [2024-12-06 17:50:51.794452] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:04.114 17:50:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:04.373 [2024-12-06 17:50:52.021432] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:05.311 Initializing NVMe Controllers 00:14:05.311 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:05.311 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:05.311 Initialization complete. Launching workers. 
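Each histogram row the overhead tool prints below reads as 'bucket_low - bucket_high: cumulative% ( bucket count )', with bucket bounds in microseconds. A sketch for pulling the slowest buckets out of a saved copy of this output; the file name overhead.log is assumed:

# list the five slowest histogram buckets recorded in a saved log
grep -Eo '[0-9]+\.[0-9]+ - [0-9]+\.[0-9]+: +[0-9.]+% +\( +[0-9]+\)' overhead.log | tail -n 5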
00:14:05.311 submit (in ns) avg, min, max = 5825.7, 2835.8, 3998053.3 00:14:05.311 complete (in ns) avg, min, max = 19506.8, 1629.2, 4084931.7 00:14:05.311 00:14:05.311 Submit histogram 00:14:05.311 ================ 00:14:05.311 Range in us Cumulative Count 00:14:05.311 2.827 - 2.840: 0.0305% ( 6) 00:14:05.311 2.840 - 2.853: 0.5177% ( 96) 00:14:05.311 2.853 - 2.867: 1.6598% ( 225) 00:14:05.311 2.867 - 2.880: 3.8424% ( 430) 00:14:05.311 2.880 - 2.893: 7.7509% ( 770) 00:14:05.311 2.893 - 2.907: 13.0552% ( 1045) 00:14:05.311 2.907 - 2.920: 19.2427% ( 1219) 00:14:05.311 2.920 - 2.933: 26.0393% ( 1339) 00:14:05.311 2.933 - 2.947: 32.0441% ( 1183) 00:14:05.311 2.947 - 2.960: 37.5565% ( 1086) 00:14:05.311 2.960 - 2.973: 44.3226% ( 1333) 00:14:05.311 2.973 - 2.987: 51.3578% ( 1386) 00:14:05.311 2.987 - 3.000: 59.1087% ( 1527) 00:14:05.311 3.000 - 3.013: 67.2047% ( 1595) 00:14:05.311 3.013 - 3.027: 75.5951% ( 1653) 00:14:05.311 3.027 - 3.040: 83.3816% ( 1534) 00:14:05.311 3.040 - 3.053: 89.6401% ( 1233) 00:14:05.311 3.053 - 3.067: 93.6958% ( 799) 00:14:05.311 3.067 - 3.080: 96.1068% ( 475) 00:14:05.311 3.080 - 3.093: 97.7818% ( 330) 00:14:05.311 3.093 - 3.107: 98.8376% ( 208) 00:14:05.311 3.107 - 3.120: 99.3046% ( 92) 00:14:05.311 3.120 - 3.133: 99.5482% ( 48) 00:14:05.311 3.133 - 3.147: 99.6396% ( 18) 00:14:05.311 3.147 - 3.160: 99.6548% ( 3) 00:14:05.311 3.173 - 3.187: 99.6650% ( 2) 00:14:05.311 3.253 - 3.267: 99.6701% ( 1) 00:14:05.311 3.413 - 3.440: 99.6751% ( 1) 00:14:05.311 3.600 - 3.627: 99.6802% ( 1) 00:14:05.311 3.707 - 3.733: 99.6853% ( 1) 00:14:05.311 3.760 - 3.787: 99.6904% ( 1) 00:14:05.311 3.813 - 3.840: 99.6954% ( 1) 00:14:05.311 4.000 - 4.027: 99.7005% ( 1) 00:14:05.311 4.133 - 4.160: 99.7056% ( 1) 00:14:05.311 4.400 - 4.427: 99.7107% ( 1) 00:14:05.311 4.587 - 4.613: 99.7158% ( 1) 00:14:05.311 4.640 - 4.667: 99.7208% ( 1) 00:14:05.311 4.667 - 4.693: 99.7259% ( 1) 00:14:05.311 4.720 - 4.747: 99.7310% ( 1) 00:14:05.311 4.773 - 4.800: 99.7361% ( 1) 00:14:05.311 4.800 - 4.827: 99.7411% ( 1) 00:14:05.311 4.827 - 4.853: 99.7462% ( 1) 00:14:05.311 4.907 - 4.933: 99.7513% ( 1) 00:14:05.311 4.987 - 5.013: 99.7564% ( 1) 00:14:05.311 5.253 - 5.280: 99.7614% ( 1) 00:14:05.311 5.307 - 5.333: 99.7716% ( 2) 00:14:05.311 5.387 - 5.413: 99.7817% ( 2) 00:14:05.311 5.413 - 5.440: 99.7868% ( 1) 00:14:05.311 5.440 - 5.467: 99.7970% ( 2) 00:14:05.311 5.573 - 5.600: 99.8020% ( 1) 00:14:05.311 5.627 - 5.653: 99.8071% ( 1) 00:14:05.311 5.707 - 5.733: 99.8122% ( 1) 00:14:05.311 5.760 - 5.787: 99.8223% ( 2) 00:14:05.311 5.787 - 5.813: 99.8274% ( 1) 00:14:05.311 5.813 - 5.840: 99.8325% ( 1) 00:14:05.311 5.840 - 5.867: 99.8426% ( 2) 00:14:05.311 5.947 - 5.973: 99.8477% ( 1) 00:14:05.311 6.053 - 6.080: 99.8528% ( 1) 00:14:05.311 6.107 - 6.133: 99.8579% ( 1) 00:14:05.311 6.187 - 6.213: 99.8630% ( 1) 00:14:05.311 6.240 - 6.267: 99.8680% ( 1) 00:14:05.311 6.267 - 6.293: 99.8731% ( 1) 00:14:05.311 6.293 - 6.320: 99.8782% ( 1) 00:14:05.311 6.373 - 6.400: 99.8833% ( 1) 00:14:05.311 6.427 - 6.453: 99.8883% ( 1) 00:14:05.311 6.453 - 6.480: 99.8934% ( 1) 00:14:05.311 6.533 - 6.560: 99.8985% ( 1) 00:14:05.311 6.587 - 6.613: 99.9036% ( 1) 00:14:05.311 6.747 - 6.773: 99.9086% ( 1) 00:14:05.311 6.987 - 7.040: 99.9137% ( 1) 00:14:05.311 8.427 - 8.480: 99.9188% ( 1) 00:14:05.311 10.933 - 10.987: 99.9239% ( 1) 00:14:05.311 35.413 - 35.627: 99.9289% ( 1) 00:14:05.311 3986.773 - 4014.080: 100.0000% ( 14) 00:14:05.311 00:14:05.311 Complete histogram 00:14:05.311 ================== 00:14:05.311 Range in us Cumulative Count 
00:14:05.311 1.627 - 1.633: 0.0051% ( 1) 00:14:05.311 1.633 - 1.640: 0.0152% ( 2) 00:14:05.311 1.640 - 1.647: 1.0507% ( 204) [2024-12-06 17:50:53.040159] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:05.311 1.647 - 1.653: 1.2334% ( 36) 00:14:05.311 1.653 - 1.660: 1.2690% ( 7) 00:14:05.311 1.660 - 1.667: 1.4060% ( 27) 00:14:05.311 1.667 - 1.673: 1.4771% ( 14) 00:14:05.311 1.673 - 1.680: 10.4309% ( 1764) 00:14:05.311 1.680 - 1.687: 45.4545% ( 6900) 00:14:05.311 1.687 - 1.693: 48.0382% ( 509) 00:14:05.311 1.693 - 1.700: 61.7075% ( 2693) 00:14:05.311 1.700 - 1.707: 74.8947% ( 2598) 00:14:05.311 1.707 - 1.720: 82.9603% ( 1589) 00:14:05.311 1.720 - 1.733: 84.3967% ( 283) 00:14:05.312 1.733 - 1.747: 87.2545% ( 563) 00:14:05.312 1.747 - 1.760: 92.2136% ( 977) 00:14:05.312 1.760 - 1.773: 96.3606% ( 817) 00:14:05.312 1.773 - 1.787: 98.5686% ( 435) 00:14:05.312 1.787 - 1.800: 99.2082% ( 126) 00:14:05.312 1.800 - 1.813: 99.3807% ( 34) 00:14:05.312 1.813 - 1.827: 99.4010% ( 4) 00:14:05.312 1.827 - 1.840: 99.4061% ( 1) 00:14:05.312 1.933 - 1.947: 99.4112% ( 1) 00:14:05.312 3.440 - 3.467: 99.4163% ( 1) 00:14:05.312 3.493 - 3.520: 99.4264% ( 2) 00:14:05.312 3.600 - 3.627: 99.4315% ( 1) 00:14:05.312 3.733 - 3.760: 99.4366% ( 1) 00:14:05.312 3.947 - 3.973: 99.4417% ( 1) 00:14:05.312 4.000 - 4.027: 99.4467% ( 1) 00:14:05.312 4.240 - 4.267: 99.4518% ( 1) 00:14:05.312 4.293 - 4.320: 99.4569% ( 1) 00:14:05.312 4.320 - 4.347: 99.4620% ( 1) 00:14:05.312 4.347 - 4.373: 99.4670% ( 1) 00:14:05.312 4.507 - 4.533: 99.4721% ( 1) 00:14:05.312 4.533 - 4.560: 99.4772% ( 1) 00:14:05.312 4.667 - 4.693: 99.4823% ( 1) 00:14:05.312 4.800 - 4.827: 99.4873% ( 1) 00:14:05.312 4.853 - 4.880: 99.4924% ( 1) 00:14:05.312 4.933 - 4.960: 99.4975% ( 1) 00:14:05.312 4.960 - 4.987: 99.5026% ( 1) 00:14:05.312 5.013 - 5.040: 99.5076% ( 1) 00:14:05.312 5.120 - 5.147: 99.5127% ( 1) 00:14:05.312 5.227 - 5.253: 99.5178% ( 1) 00:14:05.312 5.467 - 5.493: 99.5229% ( 1) 00:14:05.312 5.627 - 5.653: 99.5279% ( 1) 00:14:05.312 5.733 - 5.760: 99.5330% ( 1) 00:14:05.312 6.053 - 6.080: 99.5381% ( 1) 00:14:05.312 9.227 - 9.280: 99.5432% ( 1) 00:14:05.312 9.280 - 9.333: 99.5482% ( 1) 00:14:05.312 10.453 - 10.507: 99.5533% ( 1) 00:14:05.312 3017.387 - 3031.040: 99.5584% ( 1) 00:14:05.312 3986.773 - 4014.080: 99.9848% ( 84) 00:14:05.312 4014.080 - 4041.387: 99.9949% ( 2) 00:14:05.312 4068.693 - 4096.000: 100.0000% ( 1) 00:14:05.312 00:14:05.312 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:05.312 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:05.312 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:05.312 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:05.312 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:05.571 [ 00:14:05.571 { 00:14:05.571 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:05.571 "subtype": "Discovery", 00:14:05.571 "listen_addresses": [], 00:14:05.571 "allow_any_host": true, 00:14:05.571 "hosts": [] 00:14:05.571 }, 00:14:05.572 { 00:14:05.572 "nqn":
"nqn.2019-07.io.spdk:cnode1", 00:14:05.572 "subtype": "NVMe", 00:14:05.572 "listen_addresses": [ 00:14:05.572 { 00:14:05.572 "trtype": "VFIOUSER", 00:14:05.572 "adrfam": "IPv4", 00:14:05.572 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:05.572 "trsvcid": "0" 00:14:05.572 } 00:14:05.572 ], 00:14:05.572 "allow_any_host": true, 00:14:05.572 "hosts": [], 00:14:05.572 "serial_number": "SPDK1", 00:14:05.572 "model_number": "SPDK bdev Controller", 00:14:05.572 "max_namespaces": 32, 00:14:05.572 "min_cntlid": 1, 00:14:05.572 "max_cntlid": 65519, 00:14:05.572 "namespaces": [ 00:14:05.572 { 00:14:05.572 "nsid": 1, 00:14:05.572 "bdev_name": "Malloc1", 00:14:05.572 "name": "Malloc1", 00:14:05.572 "nguid": "A3B2132E83DB4969A2DFFB1D75E6B1A7", 00:14:05.572 "uuid": "a3b2132e-83db-4969-a2df-fb1d75e6b1a7" 00:14:05.572 } 00:14:05.572 ] 00:14:05.572 }, 00:14:05.572 { 00:14:05.572 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:05.572 "subtype": "NVMe", 00:14:05.572 "listen_addresses": [ 00:14:05.572 { 00:14:05.572 "trtype": "VFIOUSER", 00:14:05.572 "adrfam": "IPv4", 00:14:05.572 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:05.572 "trsvcid": "0" 00:14:05.572 } 00:14:05.572 ], 00:14:05.572 "allow_any_host": true, 00:14:05.572 "hosts": [], 00:14:05.572 "serial_number": "SPDK2", 00:14:05.572 "model_number": "SPDK bdev Controller", 00:14:05.572 "max_namespaces": 32, 00:14:05.572 "min_cntlid": 1, 00:14:05.572 "max_cntlid": 65519, 00:14:05.572 "namespaces": [ 00:14:05.572 { 00:14:05.572 "nsid": 1, 00:14:05.572 "bdev_name": "Malloc2", 00:14:05.572 "name": "Malloc2", 00:14:05.572 "nguid": "D6DD81190BA54FE1805E4AF52B831EE3", 00:14:05.572 "uuid": "d6dd8119-0ba5-4fe1-805e-4af52b831ee3" 00:14:05.572 } 00:14:05.572 ] 00:14:05.572 } 00:14:05.572 ] 00:14:05.572 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:05.572 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2971572 00:14:05.572 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:05.572 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:05.572 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:05.572 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:05.572 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:05.572 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:05.572 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:05.572 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:05.572 [2024-12-06 17:50:53.389549] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:05.831 Malloc3 00:14:05.831 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:05.831 [2024-12-06 17:50:53.569804] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:05.831 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:05.831 Asynchronous Event Request test 00:14:05.831 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:05.831 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:05.831 Registering asynchronous event callbacks... 00:14:05.831 Starting namespace attribute notice tests for all controllers... 00:14:05.831 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:05.831 aer_cb - Changed Namespace 00:14:05.831 Cleaning up... 
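The AER check above is driven entirely over JSON-RPC: the aer tool (@30) is backgrounded and signals readiness through the touch file the harness waits on (@37), then attaching a new namespace (@40/@41) fires the 'Changed Namespace' callback. The same sequence by hand, using the exact commands from this run; only the rpc shell variable is new:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# create a 64 MiB malloc bdev with 512-byte blocks
$rpc bdev_malloc_create 64 512 --name Malloc3
# expose it as namespace 2 of cnode1 -- this is what raises the AER
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
# Malloc3 should now appear with "nsid": 2, as in the dump below
$rpc nvmf_get_subsystems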
00:14:06.091 [ 00:14:06.091 { 00:14:06.091 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:06.091 "subtype": "Discovery", 00:14:06.091 "listen_addresses": [], 00:14:06.091 "allow_any_host": true, 00:14:06.091 "hosts": [] 00:14:06.091 }, 00:14:06.091 { 00:14:06.091 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:06.091 "subtype": "NVMe", 00:14:06.091 "listen_addresses": [ 00:14:06.091 { 00:14:06.091 "trtype": "VFIOUSER", 00:14:06.091 "adrfam": "IPv4", 00:14:06.091 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:06.091 "trsvcid": "0" 00:14:06.091 } 00:14:06.091 ], 00:14:06.091 "allow_any_host": true, 00:14:06.091 "hosts": [], 00:14:06.091 "serial_number": "SPDK1", 00:14:06.091 "model_number": "SPDK bdev Controller", 00:14:06.091 "max_namespaces": 32, 00:14:06.091 "min_cntlid": 1, 00:14:06.091 "max_cntlid": 65519, 00:14:06.091 "namespaces": [ 00:14:06.091 { 00:14:06.091 "nsid": 1, 00:14:06.091 "bdev_name": "Malloc1", 00:14:06.091 "name": "Malloc1", 00:14:06.091 "nguid": "A3B2132E83DB4969A2DFFB1D75E6B1A7", 00:14:06.091 "uuid": "a3b2132e-83db-4969-a2df-fb1d75e6b1a7" 00:14:06.091 }, 00:14:06.091 { 00:14:06.091 "nsid": 2, 00:14:06.091 "bdev_name": "Malloc3", 00:14:06.091 "name": "Malloc3", 00:14:06.091 "nguid": "3A31F0D1C5584027A3127D9935A9BCA9", 00:14:06.091 "uuid": "3a31f0d1-c558-4027-a312-7d9935a9bca9" 00:14:06.091 } 00:14:06.091 ] 00:14:06.091 }, 00:14:06.091 { 00:14:06.091 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:06.091 "subtype": "NVMe", 00:14:06.091 "listen_addresses": [ 00:14:06.091 { 00:14:06.091 "trtype": "VFIOUSER", 00:14:06.091 "adrfam": "IPv4", 00:14:06.091 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:06.091 "trsvcid": "0" 00:14:06.091 } 00:14:06.091 ], 00:14:06.091 "allow_any_host": true, 00:14:06.091 "hosts": [], 00:14:06.091 "serial_number": "SPDK2", 00:14:06.091 "model_number": "SPDK bdev Controller", 00:14:06.091 "max_namespaces": 32, 00:14:06.091 "min_cntlid": 1, 00:14:06.091 "max_cntlid": 65519, 00:14:06.091 "namespaces": [ 00:14:06.091 { 00:14:06.091 "nsid": 1, 00:14:06.091 "bdev_name": "Malloc2", 00:14:06.091 "name": "Malloc2", 00:14:06.091 "nguid": "D6DD81190BA54FE1805E4AF52B831EE3", 00:14:06.091 "uuid": "d6dd8119-0ba5-4fe1-805e-4af52b831ee3" 00:14:06.091 } 00:14:06.091 ] 00:14:06.091 } 00:14:06.091 ] 00:14:06.091 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2971572 00:14:06.091 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:06.091 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:06.091 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:06.091 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:06.091 [2024-12-06 17:50:53.757769] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:14:06.091 [2024-12-06 17:50:53.757799] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2971819 ] 00:14:06.091 [2024-12-06 17:50:53.796422] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:06.091 [2024-12-06 17:50:53.801617] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:06.091 [2024-12-06 17:50:53.801637] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6de3e3c000 00:14:06.091 [2024-12-06 17:50:53.802620] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.092 [2024-12-06 17:50:53.803625] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.092 [2024-12-06 17:50:53.804629] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.092 [2024-12-06 17:50:53.805639] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:06.092 [2024-12-06 17:50:53.806644] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:06.092 [2024-12-06 17:50:53.807644] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.092 [2024-12-06 17:50:53.808651] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:06.092 [2024-12-06 17:50:53.809662] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:06.092 [2024-12-06 17:50:53.810665] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:06.092 [2024-12-06 17:50:53.810672] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6de3e31000 00:14:06.092 [2024-12-06 17:50:53.811585] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:06.092 [2024-12-06 17:50:53.820971] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:06.092 [2024-12-06 17:50:53.820988] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:06.092 [2024-12-06 17:50:53.826064] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:06.092 [2024-12-06 17:50:53.826098] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:06.092 [2024-12-06 17:50:53.826165] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:06.092 
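With -L nvme -L nvme_vfio -L vfio_pci enabled, the identify run traces the whole bring-up: the BAR mapping above, then the CC/CSTS handshake and admin commands that follow. The controller-init state machine is easiest to follow by filtering the 'setting state to' lines; a sketch against a saved copy of this log, with the file name identify.log assumed:

# count how often each controller-init state is entered during bring-up
grep -o 'setting state to [^(]*' identify.log | sort | uniq -c | sort -rn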
[2024-12-06 17:50:53.826176] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:06.092 [2024-12-06 17:50:53.826180] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:06.092 [2024-12-06 17:50:53.827069] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:06.092 [2024-12-06 17:50:53.827079] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:06.092 [2024-12-06 17:50:53.827085] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:06.092 [2024-12-06 17:50:53.828073] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:06.092 [2024-12-06 17:50:53.828081] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:06.092 [2024-12-06 17:50:53.828087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:06.092 [2024-12-06 17:50:53.829084] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:06.092 [2024-12-06 17:50:53.829091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:06.092 [2024-12-06 17:50:53.830089] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:06.092 [2024-12-06 17:50:53.830095] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:06.092 [2024-12-06 17:50:53.830102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:06.092 [2024-12-06 17:50:53.830107] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:06.092 [2024-12-06 17:50:53.830214] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:06.092 [2024-12-06 17:50:53.830218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:06.092 [2024-12-06 17:50:53.830222] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:06.092 [2024-12-06 17:50:53.831097] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:06.092 [2024-12-06 17:50:53.832105] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:06.092 [2024-12-06 17:50:53.833116] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:06.092 [2024-12-06 17:50:53.834121] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:06.092 [2024-12-06 17:50:53.834153] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:06.092 [2024-12-06 17:50:53.835129] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:06.092 [2024-12-06 17:50:53.835136] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:06.092 [2024-12-06 17:50:53.835140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:06.092 [2024-12-06 17:50:53.835156] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:06.092 [2024-12-06 17:50:53.835163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:06.092 [2024-12-06 17:50:53.835174] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:06.092 [2024-12-06 17:50:53.835178] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.092 [2024-12-06 17:50:53.835181] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.092 [2024-12-06 17:50:53.835190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.092 [2024-12-06 17:50:53.844109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:06.092 [2024-12-06 17:50:53.844118] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:06.092 [2024-12-06 17:50:53.844122] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:06.092 [2024-12-06 17:50:53.844125] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:06.092 [2024-12-06 17:50:53.844129] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:06.092 [2024-12-06 17:50:53.844133] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:06.092 [2024-12-06 17:50:53.844136] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:06.092 [2024-12-06 17:50:53.844140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:06.092 [2024-12-06 17:50:53.844145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:06.092 [2024-12-06 
17:50:53.844153] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:06.092 [2024-12-06 17:50:53.852106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:06.092 [2024-12-06 17:50:53.852117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.092 [2024-12-06 17:50:53.852123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.092 [2024-12-06 17:50:53.852129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.092 [2024-12-06 17:50:53.852135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:06.092 [2024-12-06 17:50:53.852138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:06.092 [2024-12-06 17:50:53.852145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:06.092 [2024-12-06 17:50:53.852152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:06.092 [2024-12-06 17:50:53.860105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:06.092 [2024-12-06 17:50:53.860111] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:06.093 [2024-12-06 17:50:53.860115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:06.093 [2024-12-06 17:50:53.860123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:06.093 [2024-12-06 17:50:53.860128] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:06.093 [2024-12-06 17:50:53.860134] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:06.093 [2024-12-06 17:50:53.868105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:06.093 [2024-12-06 17:50:53.868154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:06.093 [2024-12-06 17:50:53.868160] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:06.093 [2024-12-06 17:50:53.868165] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:06.093 [2024-12-06 17:50:53.868169] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:14:06.093 [2024-12-06 17:50:53.868171] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.093 [2024-12-06 17:50:53.868176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:06.093 [2024-12-06 17:50:53.876104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:06.093 [2024-12-06 17:50:53.876114] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:06.093 [2024-12-06 17:50:53.876120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:06.093 [2024-12-06 17:50:53.876125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:06.093 [2024-12-06 17:50:53.876130] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:06.093 [2024-12-06 17:50:53.876133] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.093 [2024-12-06 17:50:53.876136] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.093 [2024-12-06 17:50:53.876140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.093 [2024-12-06 17:50:53.884105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:06.093 [2024-12-06 17:50:53.884114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:06.093 [2024-12-06 17:50:53.884120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:06.093 [2024-12-06 17:50:53.884125] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:06.093 [2024-12-06 17:50:53.884128] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.093 [2024-12-06 17:50:53.884131] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.093 [2024-12-06 17:50:53.884135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.093 [2024-12-06 17:50:53.892104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:06.093 [2024-12-06 17:50:53.892114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:06.093 [2024-12-06 17:50:53.892119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:06.093 [2024-12-06 17:50:53.892125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:14:06.093 [2024-12-06 17:50:53.892130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:06.093 [2024-12-06 17:50:53.892133] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:06.093 [2024-12-06 17:50:53.892137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:06.093 [2024-12-06 17:50:53.892141] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:06.093 [2024-12-06 17:50:53.892144] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:06.093 [2024-12-06 17:50:53.892148] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:06.093 [2024-12-06 17:50:53.892161] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:06.093 [2024-12-06 17:50:53.900108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:06.093 [2024-12-06 17:50:53.900118] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:06.093 [2024-12-06 17:50:53.908104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:06.093 [2024-12-06 17:50:53.908115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:06.093 [2024-12-06 17:50:53.916105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:06.093 [2024-12-06 17:50:53.916116] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:06.353 [2024-12-06 17:50:53.924106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:06.353 [2024-12-06 17:50:53.924119] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:06.353 [2024-12-06 17:50:53.924123] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:06.353 [2024-12-06 17:50:53.924125] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:06.353 [2024-12-06 17:50:53.924128] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:06.353 [2024-12-06 17:50:53.924130] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:06.353 [2024-12-06 17:50:53.924135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:06.354 [2024-12-06 17:50:53.924140] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:06.354 
[2024-12-06 17:50:53.924143] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:06.354 [2024-12-06 17:50:53.924146] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.354 [2024-12-06 17:50:53.924152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:06.354 [2024-12-06 17:50:53.924157] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:06.354 [2024-12-06 17:50:53.924160] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:06.354 [2024-12-06 17:50:53.924162] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.354 [2024-12-06 17:50:53.924166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:06.354 [2024-12-06 17:50:53.924172] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:06.354 [2024-12-06 17:50:53.924175] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:06.354 [2024-12-06 17:50:53.924177] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:06.354 [2024-12-06 17:50:53.924181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:06.354 [2024-12-06 17:50:53.932106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:06.354 [2024-12-06 17:50:53.932118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:06.354 [2024-12-06 17:50:53.932126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:06.354 [2024-12-06 17:50:53.932131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:06.354 ===================================================== 00:14:06.354 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:06.354 ===================================================== 00:14:06.354 Controller Capabilities/Features 00:14:06.354 ================================ 00:14:06.354 Vendor ID: 4e58 00:14:06.354 Subsystem Vendor ID: 4e58 00:14:06.354 Serial Number: SPDK2 00:14:06.354 Model Number: SPDK bdev Controller 00:14:06.354 Firmware Version: 25.01 00:14:06.354 Recommended Arb Burst: 6 00:14:06.354 IEEE OUI Identifier: 8d 6b 50 00:14:06.354 Multi-path I/O 00:14:06.354 May have multiple subsystem ports: Yes 00:14:06.354 May have multiple controllers: Yes 00:14:06.354 Associated with SR-IOV VF: No 00:14:06.354 Max Data Transfer Size: 131072 00:14:06.354 Max Number of Namespaces: 32 00:14:06.354 Max Number of I/O Queues: 127 00:14:06.354 NVMe Specification Version (VS): 1.3 00:14:06.354 NVMe Specification Version (Identify): 1.3 00:14:06.354 Maximum Queue Entries: 256 00:14:06.354 Contiguous Queues Required: Yes 00:14:06.354 Arbitration Mechanisms Supported 00:14:06.354 Weighted Round Robin: Not Supported 00:14:06.354 Vendor Specific: Not 
Supported 00:14:06.354 Reset Timeout: 15000 ms 00:14:06.354 Doorbell Stride: 4 bytes 00:14:06.354 NVM Subsystem Reset: Not Supported 00:14:06.354 Command Sets Supported 00:14:06.354 NVM Command Set: Supported 00:14:06.354 Boot Partition: Not Supported 00:14:06.354 Memory Page Size Minimum: 4096 bytes 00:14:06.354 Memory Page Size Maximum: 4096 bytes 00:14:06.354 Persistent Memory Region: Not Supported 00:14:06.354 Optional Asynchronous Events Supported 00:14:06.354 Namespace Attribute Notices: Supported 00:14:06.354 Firmware Activation Notices: Not Supported 00:14:06.354 ANA Change Notices: Not Supported 00:14:06.354 PLE Aggregate Log Change Notices: Not Supported 00:14:06.354 LBA Status Info Alert Notices: Not Supported 00:14:06.354 EGE Aggregate Log Change Notices: Not Supported 00:14:06.354 Normal NVM Subsystem Shutdown event: Not Supported 00:14:06.354 Zone Descriptor Change Notices: Not Supported 00:14:06.354 Discovery Log Change Notices: Not Supported 00:14:06.354 Controller Attributes 00:14:06.354 128-bit Host Identifier: Supported 00:14:06.354 Non-Operational Permissive Mode: Not Supported 00:14:06.354 NVM Sets: Not Supported 00:14:06.354 Read Recovery Levels: Not Supported 00:14:06.354 Endurance Groups: Not Supported 00:14:06.354 Predictable Latency Mode: Not Supported 00:14:06.354 Traffic Based Keep ALive: Not Supported 00:14:06.354 Namespace Granularity: Not Supported 00:14:06.354 SQ Associations: Not Supported 00:14:06.354 UUID List: Not Supported 00:14:06.354 Multi-Domain Subsystem: Not Supported 00:14:06.354 Fixed Capacity Management: Not Supported 00:14:06.354 Variable Capacity Management: Not Supported 00:14:06.354 Delete Endurance Group: Not Supported 00:14:06.354 Delete NVM Set: Not Supported 00:14:06.354 Extended LBA Formats Supported: Not Supported 00:14:06.354 Flexible Data Placement Supported: Not Supported 00:14:06.354 00:14:06.354 Controller Memory Buffer Support 00:14:06.354 ================================ 00:14:06.354 Supported: No 00:14:06.354 00:14:06.354 Persistent Memory Region Support 00:14:06.354 ================================ 00:14:06.354 Supported: No 00:14:06.354 00:14:06.354 Admin Command Set Attributes 00:14:06.354 ============================ 00:14:06.354 Security Send/Receive: Not Supported 00:14:06.354 Format NVM: Not Supported 00:14:06.354 Firmware Activate/Download: Not Supported 00:14:06.354 Namespace Management: Not Supported 00:14:06.354 Device Self-Test: Not Supported 00:14:06.354 Directives: Not Supported 00:14:06.354 NVMe-MI: Not Supported 00:14:06.354 Virtualization Management: Not Supported 00:14:06.354 Doorbell Buffer Config: Not Supported 00:14:06.354 Get LBA Status Capability: Not Supported 00:14:06.354 Command & Feature Lockdown Capability: Not Supported 00:14:06.354 Abort Command Limit: 4 00:14:06.354 Async Event Request Limit: 4 00:14:06.354 Number of Firmware Slots: N/A 00:14:06.354 Firmware Slot 1 Read-Only: N/A 00:14:06.354 Firmware Activation Without Reset: N/A 00:14:06.354 Multiple Update Detection Support: N/A 00:14:06.354 Firmware Update Granularity: No Information Provided 00:14:06.354 Per-Namespace SMART Log: No 00:14:06.354 Asymmetric Namespace Access Log Page: Not Supported 00:14:06.354 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:06.354 Command Effects Log Page: Supported 00:14:06.354 Get Log Page Extended Data: Supported 00:14:06.354 Telemetry Log Pages: Not Supported 00:14:06.354 Persistent Event Log Pages: Not Supported 00:14:06.354 Supported Log Pages Log Page: May Support 00:14:06.354 Commands Supported & 
Effects Log Page: Not Supported 00:14:06.354 Feature Identifiers & Effects Log Page:May Support 00:14:06.354 NVMe-MI Commands & Effects Log Page: May Support 00:14:06.354 Data Area 4 for Telemetry Log: Not Supported 00:14:06.354 Error Log Page Entries Supported: 128 00:14:06.354 Keep Alive: Supported 00:14:06.354 Keep Alive Granularity: 10000 ms 00:14:06.354 00:14:06.354 NVM Command Set Attributes 00:14:06.354 ========================== 00:14:06.354 Submission Queue Entry Size 00:14:06.354 Max: 64 00:14:06.354 Min: 64 00:14:06.354 Completion Queue Entry Size 00:14:06.354 Max: 16 00:14:06.354 Min: 16 00:14:06.354 Number of Namespaces: 32 00:14:06.354 Compare Command: Supported 00:14:06.354 Write Uncorrectable Command: Not Supported 00:14:06.354 Dataset Management Command: Supported 00:14:06.354 Write Zeroes Command: Supported 00:14:06.354 Set Features Save Field: Not Supported 00:14:06.354 Reservations: Not Supported 00:14:06.354 Timestamp: Not Supported 00:14:06.354 Copy: Supported 00:14:06.354 Volatile Write Cache: Present 00:14:06.354 Atomic Write Unit (Normal): 1 00:14:06.354 Atomic Write Unit (PFail): 1 00:14:06.354 Atomic Compare & Write Unit: 1 00:14:06.354 Fused Compare & Write: Supported 00:14:06.354 Scatter-Gather List 00:14:06.354 SGL Command Set: Supported (Dword aligned) 00:14:06.354 SGL Keyed: Not Supported 00:14:06.354 SGL Bit Bucket Descriptor: Not Supported 00:14:06.354 SGL Metadata Pointer: Not Supported 00:14:06.354 Oversized SGL: Not Supported 00:14:06.354 SGL Metadata Address: Not Supported 00:14:06.354 SGL Offset: Not Supported 00:14:06.354 Transport SGL Data Block: Not Supported 00:14:06.354 Replay Protected Memory Block: Not Supported 00:14:06.354 00:14:06.354 Firmware Slot Information 00:14:06.354 ========================= 00:14:06.354 Active slot: 1 00:14:06.354 Slot 1 Firmware Revision: 25.01 00:14:06.354 00:14:06.354 00:14:06.354 Commands Supported and Effects 00:14:06.354 ============================== 00:14:06.354 Admin Commands 00:14:06.354 -------------- 00:14:06.354 Get Log Page (02h): Supported 00:14:06.354 Identify (06h): Supported 00:14:06.354 Abort (08h): Supported 00:14:06.354 Set Features (09h): Supported 00:14:06.354 Get Features (0Ah): Supported 00:14:06.354 Asynchronous Event Request (0Ch): Supported 00:14:06.354 Keep Alive (18h): Supported 00:14:06.355 I/O Commands 00:14:06.355 ------------ 00:14:06.355 Flush (00h): Supported LBA-Change 00:14:06.355 Write (01h): Supported LBA-Change 00:14:06.355 Read (02h): Supported 00:14:06.355 Compare (05h): Supported 00:14:06.355 Write Zeroes (08h): Supported LBA-Change 00:14:06.355 Dataset Management (09h): Supported LBA-Change 00:14:06.355 Copy (19h): Supported LBA-Change 00:14:06.355 00:14:06.355 Error Log 00:14:06.355 ========= 00:14:06.355 00:14:06.355 Arbitration 00:14:06.355 =========== 00:14:06.355 Arbitration Burst: 1 00:14:06.355 00:14:06.355 Power Management 00:14:06.355 ================ 00:14:06.355 Number of Power States: 1 00:14:06.355 Current Power State: Power State #0 00:14:06.355 Power State #0: 00:14:06.355 Max Power: 0.00 W 00:14:06.355 Non-Operational State: Operational 00:14:06.355 Entry Latency: Not Reported 00:14:06.355 Exit Latency: Not Reported 00:14:06.355 Relative Read Throughput: 0 00:14:06.355 Relative Read Latency: 0 00:14:06.355 Relative Write Throughput: 0 00:14:06.355 Relative Write Latency: 0 00:14:06.355 Idle Power: Not Reported 00:14:06.355 Active Power: Not Reported 00:14:06.355 Non-Operational Permissive Mode: Not Supported 00:14:06.355 00:14:06.355 Health Information 
00:14:06.355 ================== 00:14:06.355 Critical Warnings: 00:14:06.355 Available Spare Space: OK 00:14:06.355 Temperature: OK 00:14:06.355 Device Reliability: OK 00:14:06.355 Read Only: No 00:14:06.355 Volatile Memory Backup: OK 00:14:06.355 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:06.355 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:06.355 Available Spare: 0% 00:14:06.355 Available Spare Threshold: 0% [2024-12-06 17:50:53.932207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 [2024-12-06 17:50:53.940104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 [2024-12-06 17:50:53.940130] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD [2024-12-06 17:50:53.940136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-12-06 17:50:53.940141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-12-06 17:50:53.940145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-12-06 17:50:53.940150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-12-06 17:50:53.940179] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 [2024-12-06 17:50:53.940186] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 [2024-12-06 17:50:53.941190] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller [2024-12-06 17:50:53.941226] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us [2024-12-06 17:50:53.941231] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms [2024-12-06 17:50:53.942190] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 [2024-12-06 17:50:53.942198] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds [2024-12-06 17:50:53.942244] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl [2024-12-06 17:50:53.943207] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:06.355 Life Percentage Used: 0% 00:14:06.355 Data Units Read: 0 00:14:06.355 Data Units Written: 0 00:14:06.355 Host Read Commands: 0 00:14:06.355 Host Write Commands: 0 00:14:06.355 Controller Busy Time: 0 minutes 00:14:06.355 Power Cycles: 0 00:14:06.355 Power On Hours: 0 hours 00:14:06.355 Unsafe Shutdowns: 0 00:14:06.355 Unrecoverable Media Errors: 0 00:14:06.355 Lifetime Error Log Entries: 0 00:14:06.355 Warning Temperature
Time: 0 minutes 00:14:06.355 Critical Temperature Time: 0 minutes 00:14:06.355 00:14:06.355 Number of Queues 00:14:06.355 ================ 00:14:06.355 Number of I/O Submission Queues: 127 00:14:06.355 Number of I/O Completion Queues: 127 00:14:06.355 00:14:06.355 Active Namespaces 00:14:06.355 ================= 00:14:06.355 Namespace ID:1 00:14:06.355 Error Recovery Timeout: Unlimited 00:14:06.355 Command Set Identifier: NVM (00h) 00:14:06.355 Deallocate: Supported 00:14:06.355 Deallocated/Unwritten Error: Not Supported 00:14:06.355 Deallocated Read Value: Unknown 00:14:06.355 Deallocate in Write Zeroes: Not Supported 00:14:06.355 Deallocated Guard Field: 0xFFFF 00:14:06.355 Flush: Supported 00:14:06.355 Reservation: Supported 00:14:06.355 Namespace Sharing Capabilities: Multiple Controllers 00:14:06.355 Size (in LBAs): 131072 (0GiB) 00:14:06.355 Capacity (in LBAs): 131072 (0GiB) 00:14:06.355 Utilization (in LBAs): 131072 (0GiB) 00:14:06.355 NGUID: D6DD81190BA54FE1805E4AF52B831EE3 00:14:06.355 UUID: d6dd8119-0ba5-4fe1-805e-4af52b831ee3 00:14:06.355 Thin Provisioning: Not Supported 00:14:06.355 Per-NS Atomic Units: Yes 00:14:06.355 Atomic Boundary Size (Normal): 0 00:14:06.355 Atomic Boundary Size (PFail): 0 00:14:06.355 Atomic Boundary Offset: 0 00:14:06.355 Maximum Single Source Range Length: 65535 00:14:06.355 Maximum Copy Length: 65535 00:14:06.355 Maximum Source Range Count: 1 00:14:06.355 NGUID/EUI64 Never Reused: No 00:14:06.355 Namespace Write Protected: No 00:14:06.355 Number of LBA Formats: 1 00:14:06.355 Current LBA Format: LBA Format #00 00:14:06.355 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:06.355 00:14:06.355 17:50:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:06.355 [2024-12-06 17:50:54.112495] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:11.624 Initializing NVMe Controllers 00:14:11.624 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:11.624 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:11.624 Initialization complete. Launching workers. 
00:14:11.624 ======================================================== 00:14:11.624 Latency(us) 00:14:11.624 Device Information : IOPS MiB/s Average min max 00:14:11.624 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39971.94 156.14 3202.13 865.15 7763.09 00:14:11.624 ======================================================== 00:14:11.624 Total : 39971.94 156.14 3202.13 865.15 7763.09 00:14:11.624 00:14:11.624 [2024-12-06 17:50:59.216304] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:11.624 17:50:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:11.624 [2024-12-06 17:50:59.387866] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:17.060 Initializing NVMe Controllers 00:14:17.060 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:17.060 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:17.060 Initialization complete. Launching workers. 00:14:17.060 ======================================================== 00:14:17.060 Latency(us) 00:14:17.060 Device Information : IOPS MiB/s Average min max 00:14:17.060 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39950.80 156.06 3204.02 871.49 10716.82 00:14:17.060 ======================================================== 00:14:17.060 Total : 39950.80 156.06 3204.02 871.49 10716.82 00:14:17.060 00:14:17.060 [2024-12-06 17:51:04.403914] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:17.060 17:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:17.060 [2024-12-06 17:51:04.612138] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:22.335 [2024-12-06 17:51:09.760193] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:22.335 Initializing NVMe Controllers 00:14:22.335 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:22.335 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:22.335 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:22.335 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:22.335 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:22.335 Initialization complete. Launching workers. 
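The reconnect example (@86) keeps mixed random I/O in flight from three cores and, per its name, re-establishes the connection if the controller drops; the enable/disable NOTICE pairs above bracket its session. A hand-run sketch with the flags glossed; the glosses are a reading of the example's options, not output from this job:

# -q 32: queue depth            -o 4096: I/O size in bytes
# -w randrw -M 50: random I/O with a 50% read mix
# -t 5: run time in seconds     -c 0xE: core mask (cores 1-3)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE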
00:14:22.335 Starting thread on core 2 00:14:22.335 Starting thread on core 3 00:14:22.335 Starting thread on core 1 00:14:22.335 17:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:22.335 [2024-12-06 17:51:10.007500] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:25.624 [2024-12-06 17:51:13.089393] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:25.624 Initializing NVMe Controllers 00:14:25.624 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:25.624 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:25.624 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:25.624 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:25.624 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:25.624 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:25.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:25.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:25.624 Initialization complete. Launching workers. 00:14:25.624 Starting thread on core 1 with urgent priority queue 00:14:25.624 Starting thread on core 2 with urgent priority queue 00:14:25.624 Starting thread on core 3 with urgent priority queue 00:14:25.624 Starting thread on core 0 with urgent priority queue 00:14:25.624 SPDK bdev Controller (SPDK2 ) core 0: 14460.00 IO/s 6.92 secs/100000 ios 00:14:25.624 SPDK bdev Controller (SPDK2 ) core 1: 10185.33 IO/s 9.82 secs/100000 ios 00:14:25.624 SPDK bdev Controller (SPDK2 ) core 2: 9942.67 IO/s 10.06 secs/100000 ios 00:14:25.624 SPDK bdev Controller (SPDK2 ) core 3: 11084.00 IO/s 9.02 secs/100000 ios 00:14:25.624 ======================================================== 00:14:25.624 00:14:25.624 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:25.624 [2024-12-06 17:51:13.325589] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:25.624 Initializing NVMe Controllers 00:14:25.624 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:25.624 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:25.624 Namespace ID: 1 size: 0GB 00:14:25.624 Initialization complete. 00:14:25.624 INFO: using host memory buffer for IO 00:14:25.624 Hello world! 
00:14:25.624 [2024-12-06 17:51:13.335649] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:25.624 17:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:25.883 [2024-12-06 17:51:13.560384] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:27.257 Initializing NVMe Controllers 00:14:27.257 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:27.257 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:27.257 Initialization complete. Launching workers. 00:14:27.257 submit (in ns) avg, min, max = 5581.5, 2815.0, 3997679.2 00:14:27.257 complete (in ns) avg, min, max = 16481.1, 1636.7, 5991641.7 00:14:27.257 00:14:27.257 Submit histogram 00:14:27.257 ================ 00:14:27.257 Range in us Cumulative Count 00:14:27.257 2.813 - 2.827: 0.2915% ( 58) 00:14:27.257 2.827 - 2.840: 1.1260% ( 166) 00:14:27.257 2.840 - 2.853: 3.0562% ( 384) 00:14:27.257 2.853 - 2.867: 6.1526% ( 616) 00:14:27.257 2.867 - 2.880: 10.0231% ( 770) 00:14:27.257 2.880 - 2.893: 13.8987% ( 771) 00:14:27.257 2.893 - 2.907: 19.1465% ( 1044) 00:14:27.257 2.907 - 2.920: 24.9020% ( 1145) 00:14:27.257 2.920 - 2.933: 31.0445% ( 1222) 00:14:27.257 2.933 - 2.947: 37.2725% ( 1239) 00:14:27.257 2.947 - 2.960: 44.2093% ( 1380) 00:14:27.257 2.960 - 2.973: 51.3421% ( 1419) 00:14:27.257 2.973 - 2.987: 59.6059% ( 1644) 00:14:27.257 2.987 - 3.000: 68.0808% ( 1686) 00:14:27.257 3.000 - 3.013: 76.1385% ( 1603) 00:14:27.257 3.013 - 3.027: 83.3467% ( 1434) 00:14:27.257 3.027 - 3.040: 89.3485% ( 1194) 00:14:27.257 3.040 - 3.053: 93.9731% ( 920) 00:14:27.257 3.053 - 3.067: 97.0142% ( 605) 00:14:27.257 3.067 - 3.080: 98.5171% ( 299) 00:14:27.257 3.080 - 3.093: 99.1555% ( 127) 00:14:27.257 3.093 - 3.107: 99.4069% ( 50) 00:14:27.257 3.107 - 3.120: 99.5174% ( 22) 00:14:27.257 3.120 - 3.133: 99.5928% ( 15) 00:14:27.257 3.133 - 3.147: 99.6079% ( 3) 00:14:27.257 3.227 - 3.240: 99.6129% ( 1) 00:14:27.257 3.240 - 3.253: 99.6180% ( 1) 00:14:27.257 3.253 - 3.267: 99.6230% ( 1) 00:14:27.257 3.627 - 3.653: 99.6280% ( 1) 00:14:27.257 3.653 - 3.680: 99.6331% ( 1) 00:14:27.257 3.787 - 3.813: 99.6381% ( 1) 00:14:27.257 3.840 - 3.867: 99.6431% ( 1) 00:14:27.257 4.133 - 4.160: 99.6532% ( 2) 00:14:27.257 4.267 - 4.293: 99.6582% ( 1) 00:14:27.257 4.347 - 4.373: 99.6632% ( 1) 00:14:27.257 4.480 - 4.507: 99.6682% ( 1) 00:14:27.257 4.533 - 4.560: 99.6783% ( 2) 00:14:27.257 4.560 - 4.587: 99.6833% ( 1) 00:14:27.257 4.587 - 4.613: 99.6934% ( 2) 00:14:27.257 4.640 - 4.667: 99.6984% ( 1) 00:14:27.257 4.693 - 4.720: 99.7034% ( 1) 00:14:27.257 4.800 - 4.827: 99.7185% ( 3) 00:14:27.257 4.853 - 4.880: 99.7235% ( 1) 00:14:27.257 4.880 - 4.907: 99.7336% ( 2) 00:14:27.257 4.907 - 4.933: 99.7436% ( 2) 00:14:27.257 5.013 - 5.040: 99.7487% ( 1) 00:14:27.257 5.040 - 5.067: 99.7537% ( 1) 00:14:27.257 5.120 - 5.147: 99.7587% ( 1) 00:14:27.257 5.173 - 5.200: 99.7637% ( 1) 00:14:27.257 5.253 - 5.280: 99.7688% ( 1) 00:14:27.257 5.280 - 5.307: 99.7738% ( 1) 00:14:27.257 5.307 - 5.333: 99.7788% ( 1) 00:14:27.257 5.333 - 5.360: 99.7889% ( 2) 00:14:27.257 5.360 - 5.387: 99.8040% ( 3) 00:14:27.257 5.440 - 5.467: 99.8140% ( 2) 00:14:27.257 5.467 - 5.493: 99.8190% ( 1) 00:14:27.257 5.493 - 5.520: 99.8391% ( 4) 00:14:27.257 5.520 - 5.547: 
99.8492% ( 2) 00:14:27.257 5.547 - 5.573: 99.8593% ( 2) 00:14:27.257 5.573 - 5.600: 99.8643% ( 1) 00:14:27.257 5.627 - 5.653: 99.8743% ( 2) 00:14:27.257 5.653 - 5.680: 99.8794% ( 1) 00:14:27.257 5.680 - 5.707: 99.8894% ( 2) 00:14:27.257 5.733 - 5.760: 99.8944% ( 1) 00:14:27.257 5.760 - 5.787: 99.8995% ( 1) 00:14:27.257 5.867 - 5.893: 99.9045% ( 1) 00:14:27.257 5.893 - 5.920: 99.9095% ( 1) 00:14:27.257 5.920 - 5.947: 99.9145% ( 1) 00:14:27.257 6.027 - 6.053: 99.9196% ( 1) 00:14:27.257 6.533 - 6.560: 99.9246% ( 1) 00:14:27.257 6.773 - 6.800: 99.9296% ( 1) 00:14:27.257 7.680 - 7.733: 99.9347% ( 1) 00:14:27.257 3986.773 - 4014.080: 100.0000% ( 13) 00:14:27.257 00:14:27.257 Complete histogram 00:14:27.257 ================== 00:14:27.257 Range in us Cumulative Count 00:14:27.257 1.633 - 1.640: 0.0101% ( 2) 00:14:27.257 1.640 - 1.647: 1.2164% ( 240) 00:14:27.257 1.647 - 1.653: 1.5834% ( 73) 00:14:27.257 1.653 - 1.660: 1.7191% ( 27) 00:14:27.257 1.660 - [2024-12-06 17:51:14.654638] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:27.257 1.667: 2.0509% ( 66) 00:14:27.257 1.667 - 1.673: 2.1816% ( 26) 00:14:27.257 1.673 - 1.680: 2.2519% ( 14) 00:14:27.257 1.680 - 1.687: 2.2720% ( 4) 00:14:27.257 1.687 - 1.693: 2.2821% ( 2) 00:14:27.257 1.693 - 1.700: 6.7357% ( 886) 00:14:27.257 1.700 - 1.707: 31.9745% ( 5021) 00:14:27.257 1.707 - 1.720: 55.8963% ( 4759) 00:14:27.257 1.720 - 1.733: 77.4957% ( 4297) 00:14:27.257 1.733 - 1.747: 84.1611% ( 1326) 00:14:27.257 1.747 - 1.760: 85.6037% ( 287) 00:14:27.257 1.760 - 1.773: 88.4236% ( 561) 00:14:27.257 1.773 - 1.787: 92.8571% ( 882) 00:14:27.257 1.787 - 1.800: 96.6372% ( 752) 00:14:27.257 1.800 - 1.813: 98.7735% ( 425) 00:14:27.257 1.813 - 1.827: 99.3918% ( 123) 00:14:27.257 1.827 - 1.840: 99.4823% ( 18) 00:14:27.257 1.840 - 1.853: 99.4923% ( 2) 00:14:27.257 1.920 - 1.933: 99.4973% ( 1) 00:14:27.257 3.147 - 3.160: 99.5024% ( 1) 00:14:27.257 3.173 - 3.187: 99.5074% ( 1) 00:14:27.257 3.320 - 3.333: 99.5124% ( 1) 00:14:27.257 3.360 - 3.373: 99.5174% ( 1) 00:14:27.257 3.467 - 3.493: 99.5225% ( 1) 00:14:27.257 3.520 - 3.547: 99.5275% ( 1) 00:14:27.257 3.627 - 3.653: 99.5325% ( 1) 00:14:27.257 3.680 - 3.707: 99.5375% ( 1) 00:14:27.257 3.787 - 3.813: 99.5426% ( 1) 00:14:27.257 3.813 - 3.840: 99.5476% ( 1) 00:14:27.257 3.893 - 3.920: 99.5526% ( 1) 00:14:27.257 3.947 - 3.973: 99.5577% ( 1) 00:14:27.257 3.973 - 4.000: 99.5627% ( 1) 00:14:27.257 4.027 - 4.053: 99.5677% ( 1) 00:14:27.257 4.160 - 4.187: 99.5727% ( 1) 00:14:27.257 4.213 - 4.240: 99.5778% ( 1) 00:14:27.257 4.240 - 4.267: 99.5828% ( 1) 00:14:27.257 4.293 - 4.320: 99.5878% ( 1) 00:14:27.257 4.373 - 4.400: 99.5979% ( 2) 00:14:27.257 4.507 - 4.533: 99.6029% ( 1) 00:14:27.257 4.560 - 4.587: 99.6079% ( 1) 00:14:27.257 4.747 - 4.773: 99.6129% ( 1) 00:14:27.257 6.107 - 6.133: 99.6180% ( 1) 00:14:27.257 7.733 - 7.787: 99.6230% ( 1) 00:14:27.257 7.787 - 7.840: 99.6280% ( 1) 00:14:27.257 34.347 - 34.560: 99.6331% ( 1) 00:14:27.257 2034.347 - 2048.000: 99.6381% ( 1) 00:14:27.257 2908.160 - 2921.813: 99.6431% ( 1) 00:14:27.257 3986.773 - 4014.080: 99.9849% ( 68) 00:14:27.257 4969.813 - 4997.120: 99.9899% ( 1) 00:14:27.257 5980.160 - 6007.467: 100.0000% ( 2) 00:14:27.257 00:14:27.257 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:27.257 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local 
traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:27.257 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:27.257 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:27.257 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:27.257 [ 00:14:27.257 { 00:14:27.257 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:27.258 "subtype": "Discovery", 00:14:27.258 "listen_addresses": [], 00:14:27.258 "allow_any_host": true, 00:14:27.258 "hosts": [] 00:14:27.258 }, 00:14:27.258 { 00:14:27.258 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:27.258 "subtype": "NVMe", 00:14:27.258 "listen_addresses": [ 00:14:27.258 { 00:14:27.258 "trtype": "VFIOUSER", 00:14:27.258 "adrfam": "IPv4", 00:14:27.258 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:27.258 "trsvcid": "0" 00:14:27.258 } 00:14:27.258 ], 00:14:27.258 "allow_any_host": true, 00:14:27.258 "hosts": [], 00:14:27.258 "serial_number": "SPDK1", 00:14:27.258 "model_number": "SPDK bdev Controller", 00:14:27.258 "max_namespaces": 32, 00:14:27.258 "min_cntlid": 1, 00:14:27.258 "max_cntlid": 65519, 00:14:27.258 "namespaces": [ 00:14:27.258 { 00:14:27.258 "nsid": 1, 00:14:27.258 "bdev_name": "Malloc1", 00:14:27.258 "name": "Malloc1", 00:14:27.258 "nguid": "A3B2132E83DB4969A2DFFB1D75E6B1A7", 00:14:27.258 "uuid": "a3b2132e-83db-4969-a2df-fb1d75e6b1a7" 00:14:27.258 }, 00:14:27.258 { 00:14:27.258 "nsid": 2, 00:14:27.258 "bdev_name": "Malloc3", 00:14:27.258 "name": "Malloc3", 00:14:27.258 "nguid": "3A31F0D1C5584027A3127D9935A9BCA9", 00:14:27.258 "uuid": "3a31f0d1-c558-4027-a312-7d9935a9bca9" 00:14:27.258 } 00:14:27.258 ] 00:14:27.258 }, 00:14:27.258 { 00:14:27.258 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:27.258 "subtype": "NVMe", 00:14:27.258 "listen_addresses": [ 00:14:27.258 { 00:14:27.258 "trtype": "VFIOUSER", 00:14:27.258 "adrfam": "IPv4", 00:14:27.258 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:27.258 "trsvcid": "0" 00:14:27.258 } 00:14:27.258 ], 00:14:27.258 "allow_any_host": true, 00:14:27.258 "hosts": [], 00:14:27.258 "serial_number": "SPDK2", 00:14:27.258 "model_number": "SPDK bdev Controller", 00:14:27.258 "max_namespaces": 32, 00:14:27.258 "min_cntlid": 1, 00:14:27.258 "max_cntlid": 65519, 00:14:27.258 "namespaces": [ 00:14:27.258 { 00:14:27.258 "nsid": 1, 00:14:27.258 "bdev_name": "Malloc2", 00:14:27.258 "name": "Malloc2", 00:14:27.258 "nguid": "D6DD81190BA54FE1805E4AF52B831EE3", 00:14:27.258 "uuid": "d6dd8119-0ba5-4fe1-805e-4af52b831ee3" 00:14:27.258 } 00:14:27.258 ] 00:14:27.258 } 00:14:27.258 ] 00:14:27.258 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:27.258 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2976942 00:14:27.258 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:27.258 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:27.258 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t 
/tmp/aer_touch_file 00:14:27.258 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:27.258 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:27.258 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:27.258 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:27.258 17:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:27.258 [2024-12-06 17:51:15.008464] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:27.258 Malloc4 00:14:27.258 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:27.516 [2024-12-06 17:51:15.176557] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:27.516 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:27.516 Asynchronous Event Request test 00:14:27.516 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:27.516 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:27.516 Registering asynchronous event callbacks... 00:14:27.516 Starting namespace attribute notice tests for all controllers... 00:14:27.516 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:27.516 aer_cb - Changed Namespace 00:14:27.516 Cleaning up... 
00:14:27.516 [ 00:14:27.516 { 00:14:27.516 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:27.516 "subtype": "Discovery", 00:14:27.516 "listen_addresses": [], 00:14:27.516 "allow_any_host": true, 00:14:27.516 "hosts": [] 00:14:27.516 }, 00:14:27.516 { 00:14:27.516 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:27.516 "subtype": "NVMe", 00:14:27.516 "listen_addresses": [ 00:14:27.516 { 00:14:27.516 "trtype": "VFIOUSER", 00:14:27.516 "adrfam": "IPv4", 00:14:27.516 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:27.516 "trsvcid": "0" 00:14:27.516 } 00:14:27.516 ], 00:14:27.516 "allow_any_host": true, 00:14:27.516 "hosts": [], 00:14:27.516 "serial_number": "SPDK1", 00:14:27.516 "model_number": "SPDK bdev Controller", 00:14:27.516 "max_namespaces": 32, 00:14:27.516 "min_cntlid": 1, 00:14:27.516 "max_cntlid": 65519, 00:14:27.516 "namespaces": [ 00:14:27.516 { 00:14:27.516 "nsid": 1, 00:14:27.516 "bdev_name": "Malloc1", 00:14:27.516 "name": "Malloc1", 00:14:27.516 "nguid": "A3B2132E83DB4969A2DFFB1D75E6B1A7", 00:14:27.516 "uuid": "a3b2132e-83db-4969-a2df-fb1d75e6b1a7" 00:14:27.516 }, 00:14:27.516 { 00:14:27.516 "nsid": 2, 00:14:27.516 "bdev_name": "Malloc3", 00:14:27.516 "name": "Malloc3", 00:14:27.516 "nguid": "3A31F0D1C5584027A3127D9935A9BCA9", 00:14:27.516 "uuid": "3a31f0d1-c558-4027-a312-7d9935a9bca9" 00:14:27.516 } 00:14:27.516 ] 00:14:27.516 }, 00:14:27.516 { 00:14:27.516 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:27.516 "subtype": "NVMe", 00:14:27.516 "listen_addresses": [ 00:14:27.516 { 00:14:27.516 "trtype": "VFIOUSER", 00:14:27.516 "adrfam": "IPv4", 00:14:27.516 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:27.516 "trsvcid": "0" 00:14:27.516 } 00:14:27.516 ], 00:14:27.516 "allow_any_host": true, 00:14:27.516 "hosts": [], 00:14:27.516 "serial_number": "SPDK2", 00:14:27.516 "model_number": "SPDK bdev Controller", 00:14:27.516 "max_namespaces": 32, 00:14:27.516 "min_cntlid": 1, 00:14:27.516 "max_cntlid": 65519, 00:14:27.516 "namespaces": [ 00:14:27.516 { 00:14:27.516 "nsid": 1, 00:14:27.516 "bdev_name": "Malloc2", 00:14:27.516 "name": "Malloc2", 00:14:27.516 "nguid": "D6DD81190BA54FE1805E4AF52B831EE3", 00:14:27.516 "uuid": "d6dd8119-0ba5-4fe1-805e-4af52b831ee3" 00:14:27.516 }, 00:14:27.516 { 00:14:27.516 "nsid": 2, 00:14:27.516 "bdev_name": "Malloc4", 00:14:27.516 "name": "Malloc4", 00:14:27.516 "nguid": "074EACA1F5A149BA9C430D8A89FD848F", 00:14:27.516 "uuid": "074eaca1-f5a1-49ba-9c43-0d8a89fd848f" 00:14:27.516 } 00:14:27.516 ] 00:14:27.516 } 00:14:27.516 ] 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2976942 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2966528 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2966528 ']' 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2966528 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2966528 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2966528' 00:14:27.774 killing process with pid 2966528 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2966528 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2966528 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2977143 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2977143' 00:14:27.774 Process pid: 2977143 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2977143 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2977143 ']' 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.774 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:27.774 [2024-12-06 17:51:15.579754] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:27.774 [2024-12-06 17:51:15.580683] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:14:27.774 [2024-12-06 17:51:15.580725] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.032 [2024-12-06 17:51:15.646363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:28.032 [2024-12-06 17:51:15.674834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.032 [2024-12-06 17:51:15.674865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.032 [2024-12-06 17:51:15.674871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.032 [2024-12-06 17:51:15.674876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.032 [2024-12-06 17:51:15.674880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:28.032 [2024-12-06 17:51:15.676159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.032 [2024-12-06 17:51:15.676252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.032 [2024-12-06 17:51:15.676399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.032 [2024-12-06 17:51:15.676401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.032 [2024-12-06 17:51:15.728780] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:28.032 [2024-12-06 17:51:15.729544] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:28.032 [2024-12-06 17:51:15.729569] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:28.032 [2024-12-06 17:51:15.729684] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:28.032 [2024-12-06 17:51:15.729707] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:14:28.032 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.032 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:28.032 17:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:29.024 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:29.283 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:29.283 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:29.283 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:29.283 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:29.283 17:51:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:29.283 Malloc1 00:14:29.283 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:29.540 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:29.798 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:29.798 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:29.798 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:29.798 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:30.055 Malloc2 00:14:30.055 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:30.313 17:51:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:30.313 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:30.571 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:30.571 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2977143 00:14:30.571 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 2977143 ']' 00:14:30.571 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2977143 00:14:30.571 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:30.571 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:30.571 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2977143 00:14:30.571 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:30.571 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:30.571 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2977143' 00:14:30.571 killing process with pid 2977143 00:14:30.571 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2977143 00:14:30.571 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2977143 00:14:30.571 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:30.571 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:30.571 00:14:30.571 real 0m48.794s 00:14:30.571 user 3m9.285s 00:14:30.571 sys 0m2.255s 00:14:30.571 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.571 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:30.571 ************************************ 00:14:30.571 END TEST nvmf_vfio_user 00:14:30.571 ************************************ 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:30.829 ************************************ 00:14:30.829 START TEST nvmf_vfio_user_nvme_compliance 00:14:30.829 ************************************ 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:30.829 * Looking for test storage... 
00:14:30.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:30.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.829 --rc genhtml_branch_coverage=1 00:14:30.829 --rc genhtml_function_coverage=1 00:14:30.829 --rc genhtml_legend=1 00:14:30.829 --rc geninfo_all_blocks=1 00:14:30.829 --rc geninfo_unexecuted_blocks=1 00:14:30.829 00:14:30.829 ' 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:30.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.829 --rc genhtml_branch_coverage=1 00:14:30.829 --rc genhtml_function_coverage=1 00:14:30.829 --rc genhtml_legend=1 00:14:30.829 --rc geninfo_all_blocks=1 00:14:30.829 --rc geninfo_unexecuted_blocks=1 00:14:30.829 00:14:30.829 ' 00:14:30.829 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:30.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.830 --rc genhtml_branch_coverage=1 00:14:30.830 --rc genhtml_function_coverage=1 00:14:30.830 --rc genhtml_legend=1 00:14:30.830 --rc geninfo_all_blocks=1 00:14:30.830 --rc geninfo_unexecuted_blocks=1 00:14:30.830 00:14:30.830 ' 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:30.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.830 --rc genhtml_branch_coverage=1 00:14:30.830 --rc genhtml_function_coverage=1 00:14:30.830 --rc genhtml_legend=1 00:14:30.830 --rc geninfo_all_blocks=1 00:14:30.830 --rc 
geninfo_unexecuted_blocks=1 00:14:30.830 00:14:30.830 ' 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:30.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2977889 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2977889' 00:14:30.830 Process pid: 2977889 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2977889 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2977889 ']' 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:30.830 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:30.830 [2024-12-06 17:51:18.614618] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:14:30.830 [2024-12-06 17:51:18.614671] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.089 [2024-12-06 17:51:18.680335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:31.089 [2024-12-06 17:51:18.710051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.089 [2024-12-06 17:51:18.710078] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.089 [2024-12-06 17:51:18.710086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.089 [2024-12-06 17:51:18.710091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.089 [2024-12-06 17:51:18.710095] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.089 [2024-12-06 17:51:18.711243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.089 [2024-12-06 17:51:18.711359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.089 [2024-12-06 17:51:18.711361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.089 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:31.089 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:31.089 17:51:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:32.028 malloc0 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:32.028 17:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.028 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:32.288 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.288 17:51:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:32.288 00:14:32.288 00:14:32.288 CUnit - A unit testing framework for C - Version 2.1-3 00:14:32.288 http://cunit.sourceforge.net/ 00:14:32.288 00:14:32.288 00:14:32.288 Suite: nvme_compliance 00:14:32.288 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-06 17:51:20.009802] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.288 [2024-12-06 17:51:20.011114] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:32.288 [2024-12-06 17:51:20.011125] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:32.288 [2024-12-06 17:51:20.011130] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:32.288 [2024-12-06 17:51:20.012822] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:32.288 passed 00:14:32.288 Test: admin_identify_ctrlr_verify_fused ...[2024-12-06 17:51:20.088335] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.288 [2024-12-06 17:51:20.091350] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:32.548 passed 00:14:32.548 Test: admin_identify_ns ...[2024-12-06 17:51:20.167878] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.548 [2024-12-06 17:51:20.226110] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:32.548 [2024-12-06 17:51:20.234108] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:32.548 [2024-12-06 17:51:20.258205] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:14:32.548 passed 00:14:32.548 Test: admin_get_features_mandatory_features ...[2024-12-06 17:51:20.329444] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.548 [2024-12-06 17:51:20.332468] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:32.548 passed 00:14:32.808 Test: admin_get_features_optional_features ...[2024-12-06 17:51:20.408913] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.808 [2024-12-06 17:51:20.411924] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:32.808 passed 00:14:32.808 Test: admin_set_features_number_of_queues ...[2024-12-06 17:51:20.486660] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.808 [2024-12-06 17:51:20.591226] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:32.808 passed 00:14:33.069 Test: admin_get_log_page_mandatory_logs ...[2024-12-06 17:51:20.666216] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.069 [2024-12-06 17:51:20.669240] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.069 passed 00:14:33.069 Test: admin_get_log_page_with_lpo ...[2024-12-06 17:51:20.746477] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.069 [2024-12-06 17:51:20.814106] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:33.069 [2024-12-06 17:51:20.827159] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.069 passed 00:14:33.329 Test: fabric_property_get ...[2024-12-06 17:51:20.902346] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.329 [2024-12-06 17:51:20.903537] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:33.329 [2024-12-06 17:51:20.905364] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.329 passed 00:14:33.329 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-06 17:51:20.979794] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.329 [2024-12-06 17:51:20.980994] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:33.329 [2024-12-06 17:51:20.982809] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.329 passed 00:14:33.329 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-06 17:51:21.060542] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.329 [2024-12-06 17:51:21.145114] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:33.588 [2024-12-06 17:51:21.161109] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:33.588 [2024-12-06 17:51:21.166178] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.588 passed 00:14:33.588 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-06 17:51:21.238519] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.588 [2024-12-06 17:51:21.239719] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:33.588 [2024-12-06 17:51:21.241536] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.588 passed 00:14:33.588 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-06 17:51:21.317230] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.588 [2024-12-06 17:51:21.395109] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:33.849 [2024-12-06 17:51:21.419109] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:33.849 [2024-12-06 17:51:21.424168] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.849 passed 00:14:33.849 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-06 17:51:21.497375] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.849 [2024-12-06 17:51:21.498579] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:33.849 [2024-12-06 17:51:21.498596] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:33.849 [2024-12-06 17:51:21.500393] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.849 passed 00:14:33.849 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-06 17:51:21.575459] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.849 [2024-12-06 17:51:21.671113] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:34.109 [2024-12-06 17:51:21.679103] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:34.109 [2024-12-06 17:51:21.687107] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:34.109 [2024-12-06 17:51:21.695113] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:34.109 [2024-12-06 17:51:21.724195] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.109 passed 00:14:34.109 Test: admin_create_io_sq_verify_pc ...[2024-12-06 17:51:21.796495] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:34.109 [2024-12-06 17:51:21.813113] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:34.109 [2024-12-06 17:51:21.830642] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:34.109 passed 00:14:34.109 Test: admin_create_io_qp_max_qps ...[2024-12-06 17:51:21.906106] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.488 [2024-12-06 17:51:23.012108] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:35.748 [2024-12-06 17:51:23.408053] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.748 passed 00:14:35.748 Test: admin_create_io_sq_shared_cq ...[2024-12-06 17:51:23.480826] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:36.008 [2024-12-06 17:51:23.613110] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:36.008 [2024-12-06 17:51:23.650154] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:36.008 passed 00:14:36.008 00:14:36.008 Run Summary: Type Total Ran Passed Failed Inactive 00:14:36.008 suites 1 1 n/a 0 0 00:14:36.008 tests 18 18 18 0 0 00:14:36.008 asserts 
360 360 360 0 n/a 00:14:36.008 00:14:36.008 Elapsed time = 1.499 seconds 00:14:36.008 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2977889 00:14:36.008 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2977889 ']' 00:14:36.008 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2977889 00:14:36.008 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:36.008 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:36.008 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2977889 00:14:36.008 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:36.008 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:36.008 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2977889' 00:14:36.008 killing process with pid 2977889 00:14:36.008 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2977889 00:14:36.008 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2977889 00:14:36.267 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:36.267 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:36.267 00:14:36.267 real 0m5.417s 00:14:36.267 user 0m15.406s 00:14:36.267 sys 0m0.428s 00:14:36.267 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:36.267 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:36.267 ************************************ 00:14:36.267 END TEST nvmf_vfio_user_nvme_compliance 00:14:36.267 ************************************ 00:14:36.267 17:51:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:36.267 17:51:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:36.267 17:51:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:36.267 17:51:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:36.267 ************************************ 00:14:36.267 START TEST nvmf_vfio_user_fuzz 00:14:36.267 ************************************ 00:14:36.267 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:36.267 * Looking for test storage... 
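The compliance suite above passed cleanly (18/18 tests, 360/360 asserts, 1.499 s). For reference, it can be re-run by hand against any live vfio-user endpoint; a minimal sketch, assuming an SPDK build tree at $SPDK (the CI uses the workspace path shown in the trace) and a target already listening at /var/run/vfio-user:

SPDK=/path/to/spdk   # CI: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/test/nvme/compliance/nvme_compliance -g \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

The -g flag and the transport ID string are taken verbatim from the compliance.sh@40 invocation recorded earlier in this log.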
00:14:36.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.267 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:36.268 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:14:36.268 17:51:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:36.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.268 --rc genhtml_branch_coverage=1 00:14:36.268 --rc genhtml_function_coverage=1 00:14:36.268 --rc genhtml_legend=1 00:14:36.268 --rc geninfo_all_blocks=1 00:14:36.268 --rc geninfo_unexecuted_blocks=1 00:14:36.268 00:14:36.268 ' 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:36.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.268 --rc genhtml_branch_coverage=1 00:14:36.268 --rc genhtml_function_coverage=1 00:14:36.268 --rc genhtml_legend=1 00:14:36.268 --rc geninfo_all_blocks=1 00:14:36.268 --rc geninfo_unexecuted_blocks=1 00:14:36.268 00:14:36.268 ' 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:36.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.268 --rc genhtml_branch_coverage=1 00:14:36.268 --rc genhtml_function_coverage=1 00:14:36.268 --rc genhtml_legend=1 00:14:36.268 --rc geninfo_all_blocks=1 00:14:36.268 --rc geninfo_unexecuted_blocks=1 00:14:36.268 00:14:36.268 ' 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:36.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.268 --rc genhtml_branch_coverage=1 00:14:36.268 --rc genhtml_function_coverage=1 00:14:36.268 --rc genhtml_legend=1 00:14:36.268 --rc geninfo_all_blocks=1 00:14:36.268 --rc geninfo_unexecuted_blocks=1 00:14:36.268 00:14:36.268 ' 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:36.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:36.268 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:36.269 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:36.269 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:36.269 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:36.269 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:36.269 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:36.269 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:36.269 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:36.269 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2979024 00:14:36.269 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2979024' 00:14:36.269 Process pid: 2979024 00:14:36.269 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:36.269 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2979024 00:14:36.269 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2979024 ']' 00:14:36.269 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.269 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.269 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
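Once nvmf_tgt is up and waitforlisten sees /var/tmp/spdk.sock, the vfio-user endpoint is assembled over RPC, as the xtrace records that follow show step by step. A consolidated sketch of that same sequence; rpc_cmd in the trace is the harness helper, so the use of scripts/rpc.py and the placeholder checkout path here are assumptions:

SPDK=/path/to/spdk   # CI: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # single-core target, flags as traced
mkdir -p /var/run/vfio-user
$SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0   # 64 MiB bdev, 512 B blocks
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0

The same add_ns/add_listener pair backed the compliance run earlier in this log; here the fuzzer attaches with the trid 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' for 30 seconds (-t 30) under a fixed seed (-S 123456).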
00:14:36.269 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.269 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:36.269 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:36.528 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.528 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:36.528 17:51:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:37.476 malloc0 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.476 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:37.736 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:37.736 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:37.736 17:51:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:09.828 Fuzzing completed. Shutting down the fuzz application 00:15:09.828 00:15:09.828 Dumping successful admin opcodes: 00:15:09.828 9, 10, 00:15:09.828 Dumping successful io opcodes: 00:15:09.828 0, 00:15:09.828 NS: 0x20000081ef00 I/O qp, Total commands completed: 1291397, total successful commands: 5065, random_seed: 380608896 00:15:09.828 NS: 0x20000081ef00 admin qp, Total commands completed: 291312, total successful commands: 69, random_seed: 2027730368 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2979024 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2979024 ']' 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2979024 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2979024 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2979024' 00:15:09.828 killing process with pid 2979024 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2979024 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2979024 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:09.828 00:15:09.828 real 0m31.967s 00:15:09.828 user 0m33.346s 00:15:09.828 sys 0m26.272s 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:09.828 ************************************ 00:15:09.828 END TEST nvmf_vfio_user_fuzz 00:15:09.828 ************************************ 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:09.828 ************************************ 00:15:09.828 START TEST nvmf_auth_target 00:15:09.828 ************************************ 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:09.828 * Looking for test storage... 00:15:09.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:15:09.828 17:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:09.828 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:09.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.829 --rc genhtml_branch_coverage=1 00:15:09.829 --rc genhtml_function_coverage=1 00:15:09.829 --rc genhtml_legend=1 00:15:09.829 --rc geninfo_all_blocks=1 00:15:09.829 --rc geninfo_unexecuted_blocks=1 00:15:09.829 00:15:09.829 ' 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:09.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.829 --rc genhtml_branch_coverage=1 00:15:09.829 --rc genhtml_function_coverage=1 00:15:09.829 --rc genhtml_legend=1 00:15:09.829 --rc geninfo_all_blocks=1 00:15:09.829 --rc geninfo_unexecuted_blocks=1 00:15:09.829 00:15:09.829 ' 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:09.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.829 --rc genhtml_branch_coverage=1 00:15:09.829 --rc genhtml_function_coverage=1 00:15:09.829 --rc genhtml_legend=1 00:15:09.829 --rc geninfo_all_blocks=1 00:15:09.829 --rc geninfo_unexecuted_blocks=1 00:15:09.829 00:15:09.829 ' 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:09.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.829 --rc genhtml_branch_coverage=1 00:15:09.829 --rc genhtml_function_coverage=1 00:15:09.829 --rc genhtml_legend=1 00:15:09.829 --rc geninfo_all_blocks=1 00:15:09.829 --rc geninfo_unexecuted_blocks=1 00:15:09.829 00:15:09.829 ' 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:09.829 17:51:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:09.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:09.829 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:09.830 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:09.830 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:09.830 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:09.830 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:09.830 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.830 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:09.830 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:09.830 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:09.830 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.830 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:09.830 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.830 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:09.830 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:09.830 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:09.830 17:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:14.025 
17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:14.025 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:14.025 17:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:14.025 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:14.025 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:14.026 Found net devices under 0000:31:00.0: cvl_0_0 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:14.026 Found net devices under 0000:31:00.1: cvl_0_1 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:14.026 17:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:14.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:14.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:15:14.026 00:15:14.026 --- 10.0.0.2 ping statistics --- 00:15:14.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.026 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:14.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:14.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:15:14.026 00:15:14.026 --- 10.0.0.1 ping statistics --- 00:15:14.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.026 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2989899 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2989899 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2989899 ']' 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
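
At this point the bench is fully wired: the two E810 ports discovered above are split across network namespaces so one machine can play both ends of an NVMe/TCP fabric. The first port (cvl_0_0) is moved into a fresh namespace and addressed as the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens the NVMe/TCP listener port, and a ping in each direction proves reachability. A condensed sketch of what nvmf_tcp_init executes in the trace above (interface names and addresses are the ones the log reports):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

Because NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD wrapper ("ip netns exec cvl_0_0_ns_spdk"), the nvmf_tgt that nvmfappstart launches here runs inside the namespace and listens on the target-side address 10.0.0.2:4420.
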
00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.026 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2989929 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f30b4c248b0157ffef4edbadca952a033137960cfc9ba499 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Tps 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f30b4c248b0157ffef4edbadca952a033137960cfc9ba499 0 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f30b4c248b0157ffef4edbadca952a033137960cfc9ba499 0 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f30b4c248b0157ffef4edbadca952a033137960cfc9ba499 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
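
The gen_dhchap_key calls above turn raw /dev/urandom bytes into NVMe in-band authentication secrets. The requested length is counted in hex characters, so "null 48" reads 24 random bytes (xxd -p -c0 -l 24) and keeps the resulting 48-character hex string itself as the secret; the python step then wraps it in the DHHC-1 container used everywhere below. A minimal sketch of that formatting step, assuming the layout visible in the log (two hex digits for the associated hash, 00 = null, 01 = sha256, 02 = sha384, 03 = sha512 as in the digests table above, then base64 of the ASCII secret concatenated with a 4-byte CRC-32 trailer); this is an illustration, not the verbatim helper from nvmf/common.sh:

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, i.e. "gen_dhchap_key null 48"
python - "$key" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                     # the hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")    # assumed little-endian CRC-32 trailer
print("DHHC-1:00:" + base64.b64encode(secret + crc).decode() + ":")
EOF

Decoding the DHHC-1:00:ZjMwYjRj...: secret that nvme_connect passes later in this log yields the f30b4c24... hex string generated here, followed by four checksum bytes.
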
00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Tps 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Tps 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Tps 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f07a4cfd1a73716f9b5a1668bcc2f5a28491420c2397e2c18a9d8094e1acd051 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.D3B 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f07a4cfd1a73716f9b5a1668bcc2f5a28491420c2397e2c18a9d8094e1acd051 3 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f07a4cfd1a73716f9b5a1668bcc2f5a28491420c2397e2c18a9d8094e1acd051 3 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f07a4cfd1a73716f9b5a1668bcc2f5a28491420c2397e2c18a9d8094e1acd051 00:15:14.286 17:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.D3B 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.D3B 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.D3B 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
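
Each iteration of this generation loop produces a pair of files: keys[i] is the host's DH-HMAC-CHAP secret and ckeys[i] is the optional controller secret that enables bidirectional authentication. The requested sizes of 32, 48, and 64 hex characters line up with the 32-, 48-, and 64-byte secret lengths DH-HMAC-CHAP permits, since the ASCII hex string itself is the secret. By the end of the loop the slots are filled as follows (paths from this trace), with the last slot deliberately left without a controller key so unidirectional authentication gets exercised too:

keys[0]=/tmp/spdk.key-null.Tps      ckeys[0]=/tmp/spdk.key-sha512.D3B
keys[1]=/tmp/spdk.key-sha256.QAI    ckeys[1]=/tmp/spdk.key-sha384.J9M
keys[2]=/tmp/spdk.key-sha384.bfx    ckeys[2]=/tmp/spdk.key-sha256.MRw
keys[3]=/tmp/spdk.key-sha512.faF    ckeys[3]=                          # no controller key for slot 3
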
00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e75a117bcb8df7a6bbd96ed460dd14d8 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.QAI 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e75a117bcb8df7a6bbd96ed460dd14d8 1 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e75a117bcb8df7a6bbd96ed460dd14d8 1 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e75a117bcb8df7a6bbd96ed460dd14d8 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.QAI 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.QAI 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.QAI 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2b9481891af44f199013b4595f137a3cd8552045dac153c2 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:14.286 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.J9M 00:15:14.287 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2b9481891af44f199013b4595f137a3cd8552045dac153c2 2 00:15:14.287 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2b9481891af44f199013b4595f137a3cd8552045dac153c2 2 00:15:14.287 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:14.287 17:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:14.287 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2b9481891af44f199013b4595f137a3cd8552045dac153c2 00:15:14.287 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:14.287 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.J9M 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.J9M 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.J9M 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=666f4fd80538fc4c4472d653b72b96bea5fa5683e1b8d00b 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.bfx 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 666f4fd80538fc4c4472d653b72b96bea5fa5683e1b8d00b 2 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 666f4fd80538fc4c4472d653b72b96bea5fa5683e1b8d00b 2 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=666f4fd80538fc4c4472d653b72b96bea5fa5683e1b8d00b 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.bfx 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.bfx 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.bfx 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
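
Once generated, each secret must be visible to both SPDK processes, so every file is registered twice in the trace that follows: once against the target's default RPC socket (/var/tmp/spdk.sock, via rpc_cmd) and once against the host-side daemon (/var/tmp/host.sock, via the hostrpc wrapper). For example, for slot 2 (paths as in the log; rpc.py lives under the workspace's spdk/scripts):

scripts/rpc.py keyring_file_add_key key2 /tmp/spdk.key-sha384.bfx                        # target, /var/tmp/spdk.sock
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.bfx  # host

The keyring names (key0..key3, ckey0..ckey3), not the file paths, are what the later RPCs reference through --dhchap-key and --dhchap-ctrlr-key.
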
00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:14.546 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=939de23f899e6533f5146d4f8d07fa20 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.MRw 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 939de23f899e6533f5146d4f8d07fa20 1 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 939de23f899e6533f5146d4f8d07fa20 1 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=939de23f899e6533f5146d4f8d07fa20 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.MRw 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.MRw 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.MRw 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9befe3c0ace52cc9038945ba83579a661df045523539fae0b39a58ee05ee4fad 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.faF 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 9befe3c0ace52cc9038945ba83579a661df045523539fae0b39a58ee05ee4fad 3 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9befe3c0ace52cc9038945ba83579a661df045523539fae0b39a58ee05ee4fad 3 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9befe3c0ace52cc9038945ba83579a661df045523539fae0b39a58ee05ee4fad 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.faF 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.faF 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.faF 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2989899 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2989899 ']' 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.547 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2989929 /var/tmp/host.sock 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2989929 ']' 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:14.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
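
Two SPDK processes are now running: the nvmf_tgt inside the namespace (pid 2989899, RPC on /var/tmp/spdk.sock) plays the NVMe-oF target, and a second spdk_tgt (pid 2989929, RPC on /var/tmp/host.sock, started with -L nvme_auth) plays the host. The connect_authenticate iterations below exercise one digest/dhgroup/key combination at a time: the target is told which keys a host NQN must present, the host is restricted to the digest and DH group under test, a controller is attached, and the resulting qpair is checked for a completed auth handshake. A condensed sketch of one iteration, using the subsystem, host NQN, and addresses from this log:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
# host side: allow only the combination under test
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
# target side: require key0/ckey0 from this host NQN
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# host side: attach a controller, which triggers the DH-HMAC-CHAP handshake
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# verify: the qpair JSON printed below should report state "completed" plus the negotiated digest/dhgroup
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'

Each pass then detaches the controller, reconnects once more through the plain nvme CLI using the literal DHHC-1 secrets, and removes the host entry so the next combination starts clean.
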
00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Tps 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Tps 00:15:14.807 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Tps 00:15:15.067 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.D3B ]] 00:15:15.067 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D3B 00:15:15.067 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.067 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.067 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.067 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D3B 00:15:15.067 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D3B 00:15:15.326 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:15.326 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.QAI 00:15:15.326 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.326 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.326 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.326 17:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.QAI 00:15:15.326 17:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.QAI 00:15:15.326 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.J9M ]] 00:15:15.326 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.J9M 00:15:15.326 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.326 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.326 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.326 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.J9M 00:15:15.326 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.J9M 00:15:15.586 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:15.586 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.bfx 00:15:15.586 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.586 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.586 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.586 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.bfx 00:15:15.586 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.bfx 00:15:15.846 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.MRw ]] 00:15:15.846 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MRw 00:15:15.846 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.846 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.846 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.846 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MRw 00:15:15.846 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MRw 00:15:15.846 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:15.846 17:52:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.faF 00:15:15.846 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.846 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.846 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.846 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.faF 00:15:15.846 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.faF 00:15:16.106 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:16.106 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:16.106 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:16.106 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.106 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:16.106 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:16.106 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:16.106 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.106 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:16.106 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:16.106 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:16.106 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.106 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.106 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.106 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.366 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.366 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.366 17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.366 
17:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.366 00:15:16.366 17:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.366 17:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.366 17:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.625 17:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.625 17:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.625 17:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.625 17:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.625 17:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.625 17:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.625 { 00:15:16.625 "cntlid": 1, 00:15:16.625 "qid": 0, 00:15:16.625 "state": "enabled", 00:15:16.625 "thread": "nvmf_tgt_poll_group_000", 00:15:16.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:16.625 "listen_address": { 00:15:16.625 "trtype": "TCP", 00:15:16.625 "adrfam": "IPv4", 00:15:16.625 "traddr": "10.0.0.2", 00:15:16.625 "trsvcid": "4420" 00:15:16.625 }, 00:15:16.625 "peer_address": { 00:15:16.625 "trtype": "TCP", 00:15:16.625 "adrfam": "IPv4", 00:15:16.625 "traddr": "10.0.0.1", 00:15:16.625 "trsvcid": "48452" 00:15:16.625 }, 00:15:16.625 "auth": { 00:15:16.625 "state": "completed", 00:15:16.625 "digest": "sha256", 00:15:16.625 "dhgroup": "null" 00:15:16.625 } 00:15:16.625 } 00:15:16.625 ]' 00:15:16.625 17:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.625 17:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:16.625 17:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.625 17:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:16.625 17:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.625 17:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.625 17:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.625 17:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.884 17:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:15:16.884 17:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:15:17.475 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.475 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:17.475 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.475 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.475 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.475 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.475 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:17.476 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:17.742 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:17.742 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.742 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:17.742 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:17.742 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:17.742 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.742 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.742 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.742 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.742 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.742 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.742 17:52:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.742 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.002 00:15:18.002 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.002 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.002 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.002 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.002 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.002 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.002 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.002 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.002 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.002 { 00:15:18.002 "cntlid": 3, 00:15:18.002 "qid": 0, 00:15:18.002 "state": "enabled", 00:15:18.002 "thread": "nvmf_tgt_poll_group_000", 00:15:18.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:18.002 "listen_address": { 00:15:18.002 "trtype": "TCP", 00:15:18.002 "adrfam": "IPv4", 00:15:18.002 "traddr": "10.0.0.2", 00:15:18.002 "trsvcid": "4420" 00:15:18.002 }, 00:15:18.002 "peer_address": { 00:15:18.002 "trtype": "TCP", 00:15:18.002 "adrfam": "IPv4", 00:15:18.002 "traddr": "10.0.0.1", 00:15:18.002 "trsvcid": "48474" 00:15:18.002 }, 00:15:18.002 "auth": { 00:15:18.002 "state": "completed", 00:15:18.002 "digest": "sha256", 00:15:18.002 "dhgroup": "null" 00:15:18.002 } 00:15:18.002 } 00:15:18.002 ]' 00:15:18.002 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.002 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.002 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.262 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:18.262 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.262 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.262 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.262 17:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.262 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:15:18.262 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:15:18.831 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.831 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:18.831 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.831 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.831 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.831 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.831 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:18.831 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:19.091 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:19.091 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.091 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.091 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:19.091 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:19.091 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.091 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.091 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.091 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.091 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.091 17:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.091 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.091 17:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.349 00:15:19.349 17:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.349 17:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.349 17:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.609 17:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.609 17:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.609 17:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.609 17:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.609 17:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.609 17:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.609 { 00:15:19.609 "cntlid": 5, 00:15:19.609 "qid": 0, 00:15:19.609 "state": "enabled", 00:15:19.609 "thread": "nvmf_tgt_poll_group_000", 00:15:19.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:19.609 "listen_address": { 00:15:19.609 "trtype": "TCP", 00:15:19.609 "adrfam": "IPv4", 00:15:19.609 "traddr": "10.0.0.2", 00:15:19.609 "trsvcid": "4420" 00:15:19.609 }, 00:15:19.609 "peer_address": { 00:15:19.609 "trtype": "TCP", 00:15:19.609 "adrfam": "IPv4", 00:15:19.609 "traddr": "10.0.0.1", 00:15:19.609 "trsvcid": "33906" 00:15:19.609 }, 00:15:19.609 "auth": { 00:15:19.609 "state": "completed", 00:15:19.609 "digest": "sha256", 00:15:19.609 "dhgroup": "null" 00:15:19.609 } 00:15:19.609 } 00:15:19.609 ]' 00:15:19.609 17:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.609 17:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.609 17:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.609 17:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:19.609 17:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.609 17:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.609 17:52:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:19.609 17:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:19.868 17:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff:
00:15:19.868 17:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff:
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:20.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:20.450 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:20.709
00:15:20.709 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:20.709 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:20.709 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:20.969 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:20.969 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:20.969 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:20.969 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.969 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:20.969 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:20.969 {
00:15:20.969 "cntlid": 7,
00:15:20.969 "qid": 0,
00:15:20.969 "state": "enabled",
00:15:20.969 "thread": "nvmf_tgt_poll_group_000",
00:15:20.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:15:20.969 "listen_address": {
00:15:20.969 "trtype": "TCP",
00:15:20.969 "adrfam": "IPv4",
00:15:20.969 "traddr": "10.0.0.2",
00:15:20.969 "trsvcid": "4420"
00:15:20.969 },
00:15:20.969 "peer_address": {
00:15:20.969 "trtype": "TCP",
00:15:20.969 "adrfam": "IPv4",
00:15:20.969 "traddr": "10.0.0.1",
00:15:20.969 "trsvcid": "33938"
00:15:20.969 },
00:15:20.969 "auth": {
00:15:20.969 "state": "completed",
00:15:20.969 "digest": "sha256",
00:15:20.969 "dhgroup": "null"
00:15:20.969 }
00:15:20.969 }
00:15:20.969 ]'
00:15:20.969 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:20.969 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:20.969 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:20.969 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:15:20.969 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:20.969 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:20.969 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:20.969 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:21.228 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=:
00:15:21.229 17:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=:
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:21.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:21.797 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:22.056
00:15:22.056 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:22.056 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:22.056 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:22.316 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:22.316 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:22.316 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:22.316 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:22.316 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:22.316 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:22.316 {
00:15:22.316 "cntlid": 9,
00:15:22.316 "qid": 0,
00:15:22.316 "state": "enabled",
00:15:22.316 "thread": "nvmf_tgt_poll_group_000",
00:15:22.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:15:22.316 "listen_address": {
00:15:22.316 "trtype": "TCP",
00:15:22.316 "adrfam": "IPv4",
00:15:22.316 "traddr": "10.0.0.2",
00:15:22.316 "trsvcid": "4420"
00:15:22.316 },
00:15:22.316 "peer_address": {
00:15:22.316 "trtype": "TCP",
00:15:22.316 "adrfam": "IPv4",
00:15:22.316 "traddr": "10.0.0.1",
00:15:22.316 "trsvcid": "33968"
00:15:22.316 },
00:15:22.316 "auth": {
00:15:22.316 "state": "completed",
00:15:22.316 "digest": "sha256",
00:15:22.316 "dhgroup": "ffdhe2048"
00:15:22.316 }
00:15:22.316 }
00:15:22.316 ]'
00:15:22.316 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:22.316 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:22.316 17:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:22.316 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:22.316 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:22.316 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:22.316 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:22.316 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:22.576 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=:
00:15:22.577 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=:
00:15:23.147 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:23.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:23.147 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:15:23.147 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.147 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.147 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.147 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:23.147 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:23.147 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:23.147 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:15:23.147 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:23.147 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:23.147 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:23.147 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:23.147 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:23.148 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:23.148 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.148 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.148 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.148 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:23.148 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:23.148 17:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:23.407
00:15:23.407 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:23.407 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:23.407 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:23.667 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:23.667 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:23.667 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:23.667 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:23.667 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:23.667 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:23.667 {
00:15:23.667 "cntlid": 11,
00:15:23.667 "qid": 0,
00:15:23.667 "state": "enabled",
00:15:23.667 "thread": "nvmf_tgt_poll_group_000",
00:15:23.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:15:23.667 "listen_address": {
00:15:23.667 "trtype": "TCP",
00:15:23.667 "adrfam": "IPv4",
00:15:23.667 "traddr": "10.0.0.2",
00:15:23.667 "trsvcid": "4420"
00:15:23.667 },
00:15:23.667 "peer_address": {
00:15:23.667 "trtype": "TCP",
00:15:23.667 "adrfam": "IPv4",
00:15:23.667 "traddr": "10.0.0.1",
00:15:23.667 "trsvcid": "34004"
00:15:23.667 },
00:15:23.667 "auth": {
00:15:23.667 "state": "completed",
00:15:23.667 "digest": "sha256",
00:15:23.667 "dhgroup": "ffdhe2048"
00:15:23.667 }
00:15:23.667 }
00:15:23.667 ]'
00:15:23.667 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:23.667 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:23.667 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:23.667 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:23.667 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:23.667 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:23.667 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:23.667 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:23.926 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==:
00:15:23.927 17:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==:
00:15:24.507 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:24.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:24.507 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:15:24.507 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.507 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:24.507 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.507 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:24.507 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:24.507 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:24.765 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:15:24.765 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:24.765 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:24.765 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:24.765 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:24.765 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:24.765 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:24.765 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.765 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:24.765 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.765 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:24.765 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:24.765 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:24.765
00:15:25.022 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:25.022 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:25.022 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:25.022 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:25.022 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:25.022 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.022 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:25.022 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.022 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:25.022 {
00:15:25.022 "cntlid": 13,
00:15:25.022 "qid": 0,
00:15:25.022 "state": "enabled",
00:15:25.022 "thread": "nvmf_tgt_poll_group_000",
00:15:25.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:15:25.022 "listen_address": {
00:15:25.022 "trtype": "TCP",
00:15:25.022 "adrfam": "IPv4",
00:15:25.022 "traddr": "10.0.0.2",
00:15:25.022 "trsvcid": "4420"
00:15:25.022 },
00:15:25.022 "peer_address": {
00:15:25.022 "trtype": "TCP",
00:15:25.022 "adrfam": "IPv4",
00:15:25.022 "traddr": "10.0.0.1",
00:15:25.022 "trsvcid": "34040"
00:15:25.022 },
00:15:25.022 "auth": {
00:15:25.022 "state": "completed",
00:15:25.022 "digest": "sha256",
00:15:25.022 "dhgroup": "ffdhe2048"
00:15:25.022 }
00:15:25.022 }
00:15:25.022 ]'
00:15:25.022 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:25.022 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:25.022 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:25.022 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:25.022 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:25.022 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:25.022 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:25.022 17:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:25.281 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff:
00:15:25.281 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff:
00:15:25.847 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:25.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:25.848 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:15:25.848 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.848 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:25.848 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.848 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:25.848 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:25.848 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:15:26.105 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:15:26.105 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:26.105 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:26.105 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:15:26.105 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:26.105 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:26.105 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3
00:15:26.105 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:26.105 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:26.105 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:26.105 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:26.105 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:26.105 17:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:26.364
00:15:26.364 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:26.364 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:26.364 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:26.364 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:26.364 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:26.364 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:26.364 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:26.676 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:26.676 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:26.676 {
00:15:26.676 "cntlid": 15,
00:15:26.676 "qid": 0,
00:15:26.676 "state": "enabled",
00:15:26.676 "thread": "nvmf_tgt_poll_group_000",
00:15:26.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:15:26.676 "listen_address": {
00:15:26.676 "trtype": "TCP",
00:15:26.676 "adrfam": "IPv4",
00:15:26.676 "traddr": "10.0.0.2",
00:15:26.676 "trsvcid": "4420"
00:15:26.676 },
00:15:26.676 "peer_address": {
00:15:26.676 "trtype": "TCP",
00:15:26.676 "adrfam": "IPv4",
00:15:26.676 "traddr": "10.0.0.1",
00:15:26.676 "trsvcid": "34058"
00:15:26.676 },
00:15:26.676 "auth": {
00:15:26.676 "state": "completed",
00:15:26.676 "digest": "sha256",
00:15:26.676 "dhgroup": "ffdhe2048"
00:15:26.676 }
00:15:26.676 }
00:15:26.676 ]'
00:15:26.676 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:26.676 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:26.676 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:26.676 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:26.676 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:26.676 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:26.676 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:26.676 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:26.676 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=:
00:15:26.676 17:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=:
00:15:27.363 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:27.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:27.363 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:15:27.363 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:27.363 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:27.363 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:27.363 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:27.363 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:27.363 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:27.363 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:27.622 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:15:27.622 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:27.622 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:27.622 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:27.622 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:27.622 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:27.622 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:27.622 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:27.622 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:27.622 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:27.622 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:27.622 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:27.622 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:27.879
00:15:27.879 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:27.879 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:27.879 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:27.879 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:27.879 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:27.879 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:27.879 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:27.879 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:27.879 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:27.879 {
00:15:27.879 "cntlid": 17,
00:15:27.879 "qid": 0,
00:15:27.879 "state": "enabled",
00:15:27.879 "thread": "nvmf_tgt_poll_group_000",
00:15:27.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:15:27.879 "listen_address": {
00:15:27.879 "trtype": "TCP",
00:15:27.879 "adrfam": "IPv4",
00:15:27.879 "traddr": "10.0.0.2",
00:15:27.879 "trsvcid": "4420"
00:15:27.879 },
00:15:27.879 "peer_address": {
00:15:27.879 "trtype": "TCP",
00:15:27.879 "adrfam": "IPv4",
00:15:27.879 "traddr": "10.0.0.1",
00:15:27.879 "trsvcid": "34092"
00:15:27.879 },
00:15:27.879 "auth": {
00:15:27.879 "state": "completed",
00:15:27.879 "digest": "sha256",
00:15:27.879 "dhgroup": "ffdhe3072"
00:15:27.879 }
00:15:27.879 }
00:15:27.879 ]'
00:15:27.879 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:27.879 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:27.879 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:27.879 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:27.879 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:28.137 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:28.137 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:28.137 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:28.137 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=:
00:15:28.137 17:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=:
00:15:28.704 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:28.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:28.704 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:15:28.704 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.704 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:28.704 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.704 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:28.704 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:28.704 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:28.963 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:15:28.963 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:28.963 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:28.963 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:28.963 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:28.963 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:28.963 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:28.963 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.963 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:28.963 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.963 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:28.963 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:28.963 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:29.222
00:15:29.222 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:29.222 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:29.222 17:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:29.480 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:29.480 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:29.481 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:29.481 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:29.481 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:29.481 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:29.481 {
00:15:29.481 "cntlid": 19,
00:15:29.481 "qid": 0,
00:15:29.481 "state": "enabled",
00:15:29.481 "thread": "nvmf_tgt_poll_group_000",
00:15:29.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:15:29.481 "listen_address": {
00:15:29.481 "trtype": "TCP",
00:15:29.481 "adrfam": "IPv4",
00:15:29.481 "traddr": "10.0.0.2",
00:15:29.481 "trsvcid": "4420"
00:15:29.481 },
00:15:29.481 "peer_address": {
00:15:29.481 "trtype": "TCP",
00:15:29.481 "adrfam": "IPv4",
00:15:29.481 "traddr": "10.0.0.1",
00:15:29.481 "trsvcid": "41754"
00:15:29.481 },
00:15:29.481 "auth": {
00:15:29.481 "state": "completed",
00:15:29.481 "digest": "sha256",
00:15:29.481 "dhgroup": "ffdhe3072"
00:15:29.481 }
00:15:29.481 }
00:15:29.481 ]'
00:15:29.481 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:29.481 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:29.481 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:29.481 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:29.481 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:29.481 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:29.481 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:29.481 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:29.740 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==:
00:15:29.740 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==:
00:15:30.309 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:30.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:30.309 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:15:30.309 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:30.309 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:30.309 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:30.309 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:30.309 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:30.309 17:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:30.309 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2
00:15:30.309 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:30.309 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:30.309 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:30.309 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:30.309 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:30.309 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:30.309 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:30.309 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:30.309 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:30.309 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:30.309 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:30.309 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:30.569
00:15:30.569 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:30.569 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:30.569 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:30.829 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:30.829 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:30.829 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:30.829 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:30.829 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:30.829 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:30.829 {
00:15:30.829 "cntlid": 21,
00:15:30.829 "qid": 0,
00:15:30.829 "state": "enabled",
00:15:30.829 "thread": "nvmf_tgt_poll_group_000",
00:15:30.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:15:30.829 "listen_address": {
00:15:30.829 "trtype": "TCP",
00:15:30.829 "adrfam": "IPv4",
00:15:30.829 "traddr": "10.0.0.2",
00:15:30.829 "trsvcid": "4420"
00:15:30.829 },
00:15:30.829 "peer_address": {
00:15:30.829 "trtype": "TCP",
00:15:30.829 "adrfam": "IPv4",
00:15:30.829 "traddr": "10.0.0.1",
00:15:30.829 "trsvcid": "41784"
00:15:30.829 },
00:15:30.829 "auth": {
00:15:30.829 "state": "completed",
00:15:30.829 "digest": "sha256",
00:15:30.829 "dhgroup": "ffdhe3072"
00:15:30.829 }
00:15:30.829 }
00:15:30.829 ]'
00:15:30.829 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:30.829 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:30.829 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:30.829 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:30.829 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:30.829 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:30.829 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:30.829 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:31.087 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff:
00:15:31.088 17:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff:
00:15:31.656 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:31.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:31.656 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:15:31.656 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.656 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:31.656 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.656 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:31.656 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:31.656 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:15:31.915 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3
00:15:31.915 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:31.915 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:31.915 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:31.915 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:15:31.915 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:31.915 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3
00:15:31.915 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:31.915 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:31.915 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:31.915 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:15:31.915 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:31.915 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:15:32.176
00:15:32.176 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:32.176 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:32.176 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:32.176 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:32.176 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:32.176 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:32.176 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:32.176 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:32.176 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:32.176 {
00:15:32.176 "cntlid": 23,
00:15:32.176 "qid": 0,
00:15:32.176 "state": "enabled",
00:15:32.176 "thread": "nvmf_tgt_poll_group_000",
00:15:32.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:15:32.176 "listen_address": {
00:15:32.176 "trtype": "TCP",
00:15:32.176 "adrfam": "IPv4",
00:15:32.176 "traddr": "10.0.0.2",
00:15:32.176 "trsvcid": "4420"
00:15:32.176 },
00:15:32.176 "peer_address": {
00:15:32.176 "trtype": "TCP",
00:15:32.176 "adrfam": "IPv4",
00:15:32.176 "traddr": "10.0.0.1",
00:15:32.176 "trsvcid": "41806"
00:15:32.176 },
00:15:32.176 "auth": {
00:15:32.176 "state": "completed",
00:15:32.176 "digest": "sha256",
00:15:32.176 "dhgroup": "ffdhe3072"
00:15:32.176 }
00:15:32.176 }
00:15:32.176 ]'
00:15:32.176 17:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:32.436 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:32.436 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:32.436 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:32.436 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:32.436 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:32.436 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:32.436 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:32.436 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=:
00:15:32.436 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=:
00:15:33.006 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:33.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:33.006 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:15:33.006 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:33.006 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:33.006 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.006 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:33.006 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:33.006 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:33.006 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:15:33.265 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0
00:15:33.265 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:33.265 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:15:33.265 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:15:33.265 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:33.265 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:33.265 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:33.265 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:33.265 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:33.265 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.265 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:33.265 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:33.265 17:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:33.524
00:15:33.524 17:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:33.524 17:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:33.524 17:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:33.783 17:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:33.783 17:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:33.783 17:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:33.783 17:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:33.783 17:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:33.783 17:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:33.783 {
00:15:33.783 "cntlid": 25,
00:15:33.783 "qid": 0,
00:15:33.783 "state": "enabled",
00:15:33.783 "thread": "nvmf_tgt_poll_group_000",
00:15:33.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:15:33.783 "listen_address": {
00:15:33.783 "trtype": "TCP",
00:15:33.783 "adrfam": "IPv4",
00:15:33.784 "traddr": "10.0.0.2",
00:15:33.784 "trsvcid": "4420"
00:15:33.784 },
00:15:33.784 "peer_address": {
00:15:33.784 "trtype": "TCP",
00:15:33.784 "adrfam": "IPv4",
00:15:33.784 "traddr": "10.0.0.1",
00:15:33.784 "trsvcid": "41842"
00:15:33.784 },
00:15:33.784 "auth": {
00:15:33.784 "state": "completed",
00:15:33.784 "digest": "sha256",
00:15:33.784 "dhgroup": "ffdhe4096"
00:15:33.784 }
00:15:33.784 }
00:15:33.784 ]'
00:15:33.784 17:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:33.784 17:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:15:33.784 17:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:33.784 17:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:33.784 17:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:33.784 17:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:33.784 17:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:33.784 17:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:34.043 17:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=:
00:15:34.043 17:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=:
00:15:34.610 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:34.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:34.610 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:34.610 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.610 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.610 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.610 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.610 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:34.610 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:34.610 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:34.610 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.610 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:34.610 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:34.610 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:34.610 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.610 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.610 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.610 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.870 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.870 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.870 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.870 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.870 00:15:34.870 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.870 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.870 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.129 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.129 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.129 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.129 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.129 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.129 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.129 { 00:15:35.129 "cntlid": 27, 00:15:35.129 "qid": 0, 00:15:35.129 "state": "enabled", 00:15:35.129 "thread": "nvmf_tgt_poll_group_000", 00:15:35.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:35.129 "listen_address": { 00:15:35.129 "trtype": "TCP", 00:15:35.129 "adrfam": "IPv4", 00:15:35.129 "traddr": "10.0.0.2", 00:15:35.129 "trsvcid": "4420" 00:15:35.129 }, 00:15:35.129 "peer_address": { 00:15:35.129 "trtype": "TCP", 00:15:35.129 "adrfam": "IPv4", 00:15:35.129 "traddr": "10.0.0.1", 00:15:35.129 "trsvcid": "41866" 00:15:35.129 }, 00:15:35.129 "auth": { 00:15:35.129 "state": "completed", 00:15:35.129 "digest": "sha256", 00:15:35.129 "dhgroup": "ffdhe4096" 00:15:35.129 } 00:15:35.129 } 00:15:35.129 ]' 00:15:35.129 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.129 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.129 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.129 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:35.129 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.129 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.129 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.129 17:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.388 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:15:35.388 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:15:35.955 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:35.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.955 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:35.955 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.956 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.956 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.956 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.956 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:35.956 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:36.215 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:36.215 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.215 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.215 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:36.215 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:36.215 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.215 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.215 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.215 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.215 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.215 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.215 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.215 17:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.474 00:15:36.474 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
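Every iteration in this trace repeats the same five-step RPC sequence. The condensed sketch below replays the ffdhe4096/key2 iteration outside the test harness: the sockets, NQNs, addresses, and key names are copied from the commands logged above, while the standalone variable setup and the single combined jq probe (the harness runs three separate jq checks) are illustrative assumptions, not part of auth.sh.

# Minimal standalone sketch of one connect_authenticate iteration,
# reconstructed from the trace above. Assumes the target app listens on
# the default RPC socket, the host app on /var/tmp/host.sock, and that
# key2/ckey2 are already loaded as keyring entries, as in this job.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
SUBNQN=nqn.2024-03.io.spdk:cnode0

# 1. Restrict the host-side NVMe driver to the digest/dhgroup under test.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
# 2. Register the host's DH-HMAC-CHAP key (plus a ctrlr key for mutual auth) on the target.
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# 3. Attach a controller from the host app; this is where the handshake actually runs.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
# 4. Confirm on the target that the qpair authenticated with the expected parameters.
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -e '.[0].auth | .state == "completed" and .digest == "sha256" and .dhgroup == "ffdhe4096"'
# 5. Tear down before the next key/dhgroup combination.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0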
00:15:36.474 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.474 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.734 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.734 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.734 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.734 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.734 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.734 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.734 { 00:15:36.734 "cntlid": 29, 00:15:36.734 "qid": 0, 00:15:36.734 "state": "enabled", 00:15:36.734 "thread": "nvmf_tgt_poll_group_000", 00:15:36.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:36.734 "listen_address": { 00:15:36.734 "trtype": "TCP", 00:15:36.734 "adrfam": "IPv4", 00:15:36.734 "traddr": "10.0.0.2", 00:15:36.734 "trsvcid": "4420" 00:15:36.734 }, 00:15:36.734 "peer_address": { 00:15:36.734 "trtype": "TCP", 00:15:36.734 "adrfam": "IPv4", 00:15:36.734 "traddr": "10.0.0.1", 00:15:36.734 "trsvcid": "41894" 00:15:36.734 }, 00:15:36.734 "auth": { 00:15:36.734 "state": "completed", 00:15:36.734 "digest": "sha256", 00:15:36.734 "dhgroup": "ffdhe4096" 00:15:36.734 } 00:15:36.734 } 00:15:36.734 ]' 00:15:36.734 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.735 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:36.735 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.735 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:36.735 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.735 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.735 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.735 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.735 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:15:36.735 17:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: 
--dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:15:37.304 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.304 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:37.304 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.304 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.304 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.304 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.304 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:37.304 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:37.575 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:37.575 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.575 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:37.575 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:37.575 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:37.575 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.575 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:15:37.575 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.575 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.575 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.575 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:37.575 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:37.575 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:37.837 00:15:37.837 17:52:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.837 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.837 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.096 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.096 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.096 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.096 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.096 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.096 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.096 { 00:15:38.096 "cntlid": 31, 00:15:38.096 "qid": 0, 00:15:38.096 "state": "enabled", 00:15:38.096 "thread": "nvmf_tgt_poll_group_000", 00:15:38.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:38.096 "listen_address": { 00:15:38.096 "trtype": "TCP", 00:15:38.096 "adrfam": "IPv4", 00:15:38.096 "traddr": "10.0.0.2", 00:15:38.096 "trsvcid": "4420" 00:15:38.096 }, 00:15:38.097 "peer_address": { 00:15:38.097 "trtype": "TCP", 00:15:38.097 "adrfam": "IPv4", 00:15:38.097 "traddr": "10.0.0.1", 00:15:38.097 "trsvcid": "41924" 00:15:38.097 }, 00:15:38.097 "auth": { 00:15:38.097 "state": "completed", 00:15:38.097 "digest": "sha256", 00:15:38.097 "dhgroup": "ffdhe4096" 00:15:38.097 } 00:15:38.097 } 00:15:38.097 ]' 00:15:38.097 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.097 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:38.097 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.097 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:38.097 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.097 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.097 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.097 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.356 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:15:38.356 17:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret 
DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:15:38.616 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.876 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.137 00:15:39.137 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.137 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.137 17:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.398 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.398 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.398 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.398 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.398 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.398 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.398 { 00:15:39.398 "cntlid": 33, 00:15:39.398 "qid": 0, 00:15:39.398 "state": "enabled", 00:15:39.398 "thread": "nvmf_tgt_poll_group_000", 00:15:39.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:39.398 "listen_address": { 00:15:39.398 "trtype": "TCP", 00:15:39.398 "adrfam": "IPv4", 00:15:39.398 "traddr": "10.0.0.2", 00:15:39.398 "trsvcid": "4420" 00:15:39.398 }, 00:15:39.398 "peer_address": { 00:15:39.398 "trtype": "TCP", 00:15:39.398 "adrfam": "IPv4", 00:15:39.398 "traddr": "10.0.0.1", 00:15:39.398 "trsvcid": "38476" 00:15:39.398 }, 00:15:39.398 "auth": { 00:15:39.398 "state": "completed", 00:15:39.398 "digest": "sha256", 00:15:39.398 "dhgroup": "ffdhe6144" 00:15:39.398 } 00:15:39.398 } 00:15:39.398 ]' 00:15:39.398 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.398 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.398 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.398 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:39.398 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.398 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.398 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.398 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.658 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret 
DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:15:39.658 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:15:40.226 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.226 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:40.226 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.226 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.226 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.226 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.226 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:40.226 17:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:40.487 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:40.487 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.487 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:40.487 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:40.487 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:40.487 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.487 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.487 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.487 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.487 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.487 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.487 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.487 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.747 00:15:40.747 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.747 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.747 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.007 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.007 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.007 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.007 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.007 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.007 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.007 { 00:15:41.007 "cntlid": 35, 00:15:41.007 "qid": 0, 00:15:41.007 "state": "enabled", 00:15:41.007 "thread": "nvmf_tgt_poll_group_000", 00:15:41.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:41.007 "listen_address": { 00:15:41.007 "trtype": "TCP", 00:15:41.007 "adrfam": "IPv4", 00:15:41.007 "traddr": "10.0.0.2", 00:15:41.007 "trsvcid": "4420" 00:15:41.007 }, 00:15:41.007 "peer_address": { 00:15:41.007 "trtype": "TCP", 00:15:41.007 "adrfam": "IPv4", 00:15:41.007 "traddr": "10.0.0.1", 00:15:41.007 "trsvcid": "38512" 00:15:41.007 }, 00:15:41.007 "auth": { 00:15:41.007 "state": "completed", 00:15:41.007 "digest": "sha256", 00:15:41.007 "dhgroup": "ffdhe6144" 00:15:41.007 } 00:15:41.007 } 00:15:41.007 ]' 00:15:41.007 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.007 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:41.007 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.007 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:41.007 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.008 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.008 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.008 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.267 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:15:41.267 17:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:15:41.835 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.835 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:41.835 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.835 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.835 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.835 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.835 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:41.835 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:42.093 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:42.093 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.093 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:42.093 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:42.093 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:42.093 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.093 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.093 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.093 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.093 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.093 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.093 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.093 17:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.352 00:15:42.352 17:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.352 17:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.352 17:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.611 17:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.611 17:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.611 17:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.611 17:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.611 17:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.611 17:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.611 { 00:15:42.611 "cntlid": 37, 00:15:42.611 "qid": 0, 00:15:42.611 "state": "enabled", 00:15:42.611 "thread": "nvmf_tgt_poll_group_000", 00:15:42.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:42.611 "listen_address": { 00:15:42.611 "trtype": "TCP", 00:15:42.611 "adrfam": "IPv4", 00:15:42.611 "traddr": "10.0.0.2", 00:15:42.611 "trsvcid": "4420" 00:15:42.611 }, 00:15:42.611 "peer_address": { 00:15:42.611 "trtype": "TCP", 00:15:42.611 "adrfam": "IPv4", 00:15:42.611 "traddr": "10.0.0.1", 00:15:42.611 "trsvcid": "38540" 00:15:42.611 }, 00:15:42.611 "auth": { 00:15:42.611 "state": "completed", 00:15:42.611 "digest": "sha256", 00:15:42.611 "dhgroup": "ffdhe6144" 00:15:42.611 } 00:15:42.611 } 00:15:42.611 ]' 00:15:42.611 17:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.611 17:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.611 17:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.611 17:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:42.611 17:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.611 17:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.611 17:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:42.611 17:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.869 17:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:15:42.869 17:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:15:43.436 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.436 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:43.436 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.436 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.436 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.436 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.436 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:43.436 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:43.436 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:43.436 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.436 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.436 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:43.436 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:43.436 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.437 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:15:43.437 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.437 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.437 17:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.437 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:43.437 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.437 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:44.004 00:15:44.004 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.004 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.004 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.004 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.004 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.004 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.004 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.004 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.004 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.004 { 00:15:44.004 "cntlid": 39, 00:15:44.004 "qid": 0, 00:15:44.004 "state": "enabled", 00:15:44.004 "thread": "nvmf_tgt_poll_group_000", 00:15:44.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:44.004 "listen_address": { 00:15:44.004 "trtype": "TCP", 00:15:44.004 "adrfam": "IPv4", 00:15:44.004 "traddr": "10.0.0.2", 00:15:44.004 "trsvcid": "4420" 00:15:44.004 }, 00:15:44.004 "peer_address": { 00:15:44.004 "trtype": "TCP", 00:15:44.004 "adrfam": "IPv4", 00:15:44.004 "traddr": "10.0.0.1", 00:15:44.004 "trsvcid": "38564" 00:15:44.004 }, 00:15:44.004 "auth": { 00:15:44.004 "state": "completed", 00:15:44.004 "digest": "sha256", 00:15:44.004 "dhgroup": "ffdhe6144" 00:15:44.004 } 00:15:44.004 } 00:15:44.004 ]' 00:15:44.004 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.004 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.004 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.004 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:44.004 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.262 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:15:44.262 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.262 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.262 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:15:44.262 17:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:15:44.828 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.828 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:44.828 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.828 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.828 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.828 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:44.828 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.829 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:44.829 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:45.091 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:45.091 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.091 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:45.091 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:45.091 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:45.091 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.091 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.091 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
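Two details of the authentication flow are visible at this point in the log. First, the secrets handed to nvme connect follow the NVMe DH-HMAC-CHAP secret representation "DHHC-1:NN:<base64>:", where NN records how the secret was transformed: key0 here uses DHHC-1:00: (non-transformed), key1 DHHC-1:01:, key2 DHHC-1:02:, and key3 DHHC-1:03:, which per the NVMe base specification should correspond to SHA-256-, SHA-384-, and SHA-512-transformed secrets respectively (that mapping comes from the spec, not from anything in this trace). Second, the key0-key2 iterations pass both a host secret and a ctrlr/ctrl secret, i.e. bidirectional authentication, whereas every key3 iteration passes only the host key: the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion seen above yields nothing when the matching ckeys entry is empty. A standalone bash illustration of that expansion, with placeholder values assumed:

# ckeys[3] left empty mirrors the harness; the names/values are hypothetical.
ckeys=("ckey0val" "ckey1val" "ckey2val" "")
for keyid in 0 3; do
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "key$keyid -> ${ckey[*]:-<unidirectional, no ctrlr key>}"
done
# prints: key0 -> --dhchap-ctrlr-key ckey0
#         key3 -> <unidirectional, no ctrlr key>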
00:15:45.091 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.091 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.091 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.091 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.091 17:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.657 00:15:45.657 17:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.657 17:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.657 17:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.657 17:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.657 17:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.657 17:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.657 17:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.657 17:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.657 17:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.657 { 00:15:45.657 "cntlid": 41, 00:15:45.657 "qid": 0, 00:15:45.657 "state": "enabled", 00:15:45.657 "thread": "nvmf_tgt_poll_group_000", 00:15:45.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:45.657 "listen_address": { 00:15:45.657 "trtype": "TCP", 00:15:45.657 "adrfam": "IPv4", 00:15:45.657 "traddr": "10.0.0.2", 00:15:45.657 "trsvcid": "4420" 00:15:45.657 }, 00:15:45.657 "peer_address": { 00:15:45.657 "trtype": "TCP", 00:15:45.657 "adrfam": "IPv4", 00:15:45.657 "traddr": "10.0.0.1", 00:15:45.657 "trsvcid": "38592" 00:15:45.657 }, 00:15:45.657 "auth": { 00:15:45.657 "state": "completed", 00:15:45.657 "digest": "sha256", 00:15:45.657 "dhgroup": "ffdhe8192" 00:15:45.657 } 00:15:45.657 } 00:15:45.657 ]' 00:15:45.657 17:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.657 17:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.657 17:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.657 17:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:45.657 17:52:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.915 17:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.915 17:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.915 17:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.915 17:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:15:45.915 17:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:15:46.479 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.479 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:46.479 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.479 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.479 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.479 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.479 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:46.479 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:46.737 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:46.737 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.737 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:46.737 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:46.737 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:46.737 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.737 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.737 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.737 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.737 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.737 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.737 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.737 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.305 00:15:47.305 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.305 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.305 17:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.305 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.305 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.305 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.305 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.305 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.305 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.305 { 00:15:47.305 "cntlid": 43, 00:15:47.305 "qid": 0, 00:15:47.305 "state": "enabled", 00:15:47.305 "thread": "nvmf_tgt_poll_group_000", 00:15:47.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:47.305 "listen_address": { 00:15:47.305 "trtype": "TCP", 00:15:47.305 "adrfam": "IPv4", 00:15:47.305 "traddr": "10.0.0.2", 00:15:47.305 "trsvcid": "4420" 00:15:47.305 }, 00:15:47.305 "peer_address": { 00:15:47.305 "trtype": "TCP", 00:15:47.305 "adrfam": "IPv4", 00:15:47.305 "traddr": "10.0.0.1", 00:15:47.305 "trsvcid": "38618" 00:15:47.305 }, 00:15:47.305 "auth": { 00:15:47.305 "state": "completed", 00:15:47.305 "digest": "sha256", 00:15:47.305 "dhgroup": "ffdhe8192" 00:15:47.305 } 00:15:47.305 } 00:15:47.305 ]' 00:15:47.305 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.305 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:15:47.305 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.563 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:47.563 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.563 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.563 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.563 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.563 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:15:47.563 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:15:48.130 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.387 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:48.387 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.387 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.387 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.387 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.387 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:48.387 17:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:48.387 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:48.387 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.387 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:48.387 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:48.387 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:48.388 17:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.388 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.388 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.388 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.388 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.388 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.388 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.388 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.954 00:15:48.954 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.955 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.955 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.955 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.955 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.955 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.955 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.213 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.213 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.213 { 00:15:49.213 "cntlid": 45, 00:15:49.213 "qid": 0, 00:15:49.213 "state": "enabled", 00:15:49.213 "thread": "nvmf_tgt_poll_group_000", 00:15:49.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:49.213 "listen_address": { 00:15:49.213 "trtype": "TCP", 00:15:49.213 "adrfam": "IPv4", 00:15:49.213 "traddr": "10.0.0.2", 00:15:49.213 "trsvcid": "4420" 00:15:49.213 }, 00:15:49.213 "peer_address": { 00:15:49.213 "trtype": "TCP", 00:15:49.213 "adrfam": "IPv4", 00:15:49.213 "traddr": "10.0.0.1", 00:15:49.213 "trsvcid": "32976" 00:15:49.213 }, 00:15:49.213 "auth": { 00:15:49.213 "state": "completed", 00:15:49.214 "digest": "sha256", 00:15:49.214 "dhgroup": "ffdhe8192" 00:15:49.214 } 00:15:49.214 } 00:15:49.214 ]' 00:15:49.214 
17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.214 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.214 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.214 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:49.214 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.214 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.214 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.214 17:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.214 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:15:49.214 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.151 17:52:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.151 17:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.719 00:15:50.719 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.719 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.719 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.719 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.719 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.719 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.719 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.719 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.719 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.719 { 00:15:50.719 "cntlid": 47, 00:15:50.719 "qid": 0, 00:15:50.719 "state": "enabled", 00:15:50.719 "thread": "nvmf_tgt_poll_group_000", 00:15:50.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:50.719 "listen_address": { 00:15:50.719 "trtype": "TCP", 00:15:50.719 "adrfam": "IPv4", 00:15:50.719 "traddr": "10.0.0.2", 00:15:50.719 "trsvcid": "4420" 00:15:50.719 }, 00:15:50.719 "peer_address": { 00:15:50.719 "trtype": "TCP", 00:15:50.719 "adrfam": "IPv4", 00:15:50.719 "traddr": "10.0.0.1", 00:15:50.719 "trsvcid": "33004" 00:15:50.719 }, 00:15:50.719 "auth": { 00:15:50.719 "state": "completed", 00:15:50.719 
"digest": "sha256", 00:15:50.719 "dhgroup": "ffdhe8192" 00:15:50.719 } 00:15:50.719 } 00:15:50.719 ]' 00:15:50.719 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.719 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.719 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.719 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:50.719 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.719 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.719 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.719 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.978 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:15:50.978 17:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:15:51.545 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.545 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:51.545 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.545 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.545 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.545 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:51.545 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.545 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.545 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:51.545 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:51.804 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:51.804 17:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.804 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:51.804 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:51.804 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:51.804 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.804 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.804 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.804 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.805 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.805 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.805 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.805 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.063 00:15:52.063 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.063 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.063 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.063 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.064 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.064 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.064 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.064 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.064 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.064 { 00:15:52.064 "cntlid": 49, 00:15:52.064 "qid": 0, 00:15:52.064 "state": "enabled", 00:15:52.064 "thread": "nvmf_tgt_poll_group_000", 00:15:52.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:52.064 "listen_address": { 00:15:52.064 "trtype": "TCP", 00:15:52.064 "adrfam": "IPv4", 
00:15:52.064 "traddr": "10.0.0.2", 00:15:52.064 "trsvcid": "4420" 00:15:52.064 }, 00:15:52.064 "peer_address": { 00:15:52.064 "trtype": "TCP", 00:15:52.064 "adrfam": "IPv4", 00:15:52.064 "traddr": "10.0.0.1", 00:15:52.064 "trsvcid": "33024" 00:15:52.064 }, 00:15:52.064 "auth": { 00:15:52.064 "state": "completed", 00:15:52.064 "digest": "sha384", 00:15:52.064 "dhgroup": "null" 00:15:52.064 } 00:15:52.064 } 00:15:52.064 ]' 00:15:52.064 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.064 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.323 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.323 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:52.323 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.323 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.323 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.323 17:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.323 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:15:52.323 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:15:52.891 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.150 17:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.408 00:15:53.408 17:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.408 17:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.408 17:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.668 17:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.668 17:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.668 17:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.668 17:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.668 17:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.668 17:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.668 { 00:15:53.668 "cntlid": 51, 00:15:53.668 "qid": 0, 00:15:53.668 "state": "enabled", 
00:15:53.668 "thread": "nvmf_tgt_poll_group_000", 00:15:53.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:53.668 "listen_address": { 00:15:53.668 "trtype": "TCP", 00:15:53.668 "adrfam": "IPv4", 00:15:53.668 "traddr": "10.0.0.2", 00:15:53.668 "trsvcid": "4420" 00:15:53.668 }, 00:15:53.668 "peer_address": { 00:15:53.668 "trtype": "TCP", 00:15:53.668 "adrfam": "IPv4", 00:15:53.668 "traddr": "10.0.0.1", 00:15:53.668 "trsvcid": "33050" 00:15:53.668 }, 00:15:53.668 "auth": { 00:15:53.668 "state": "completed", 00:15:53.668 "digest": "sha384", 00:15:53.668 "dhgroup": "null" 00:15:53.668 } 00:15:53.668 } 00:15:53.668 ]' 00:15:53.668 17:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.668 17:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.668 17:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.668 17:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:53.668 17:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.668 17:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.668 17:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.668 17:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.928 17:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:15:53.928 17:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.497 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.756 00:15:54.756 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.756 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.756 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.015 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.015 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.015 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.015 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.015 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.015 17:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.015 { 00:15:55.015 "cntlid": 53, 00:15:55.015 "qid": 0, 00:15:55.015 "state": "enabled", 00:15:55.015 "thread": "nvmf_tgt_poll_group_000", 00:15:55.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:55.015 "listen_address": { 00:15:55.015 "trtype": "TCP", 00:15:55.015 "adrfam": "IPv4", 00:15:55.015 "traddr": "10.0.0.2", 00:15:55.015 "trsvcid": "4420" 00:15:55.015 }, 00:15:55.015 "peer_address": { 00:15:55.015 "trtype": "TCP", 00:15:55.015 "adrfam": "IPv4", 00:15:55.015 "traddr": "10.0.0.1", 00:15:55.015 "trsvcid": "33074" 00:15:55.015 }, 00:15:55.015 "auth": { 00:15:55.015 "state": "completed", 00:15:55.015 "digest": "sha384", 00:15:55.015 "dhgroup": "null" 00:15:55.015 } 00:15:55.015 } 00:15:55.015 ]' 00:15:55.015 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.015 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.015 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.015 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:55.015 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.015 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.015 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.015 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.273 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:15:55.273 17:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:15:55.843 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.843 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:55.843 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.843 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.843 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.843 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:15:55.843 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:55.843 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:56.102 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:56.102 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.102 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:56.102 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:56.102 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:56.102 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.102 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:15:56.102 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.102 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.102 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.102 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:56.102 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.102 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.102 00:15:56.102 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.102 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.102 17:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.361 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.361 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.361 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.361 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.361 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.361 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.361 { 00:15:56.361 "cntlid": 55, 00:15:56.361 "qid": 0, 00:15:56.361 "state": "enabled", 00:15:56.361 "thread": "nvmf_tgt_poll_group_000", 00:15:56.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:56.361 "listen_address": { 00:15:56.361 "trtype": "TCP", 00:15:56.361 "adrfam": "IPv4", 00:15:56.361 "traddr": "10.0.0.2", 00:15:56.361 "trsvcid": "4420" 00:15:56.361 }, 00:15:56.361 "peer_address": { 00:15:56.361 "trtype": "TCP", 00:15:56.361 "adrfam": "IPv4", 00:15:56.361 "traddr": "10.0.0.1", 00:15:56.361 "trsvcid": "33100" 00:15:56.361 }, 00:15:56.361 "auth": { 00:15:56.361 "state": "completed", 00:15:56.362 "digest": "sha384", 00:15:56.362 "dhgroup": "null" 00:15:56.362 } 00:15:56.362 } 00:15:56.362 ]' 00:15:56.362 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.362 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.362 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.362 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:56.362 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.620 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.621 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.621 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.621 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:15:56.621 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:15:57.188 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.188 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:57.188 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.188 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.188 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.188 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.188 17:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.188 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:57.188 17:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:57.446 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:57.446 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.446 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:57.446 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:57.446 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:57.446 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.446 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.446 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.446 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.446 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.446 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.446 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.446 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.704 00:15:57.704 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.704 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.704 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.704 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.704 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.704 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:57.704 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.704 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.704 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.704 { 00:15:57.704 "cntlid": 57, 00:15:57.704 "qid": 0, 00:15:57.704 "state": "enabled", 00:15:57.704 "thread": "nvmf_tgt_poll_group_000", 00:15:57.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:57.704 "listen_address": { 00:15:57.704 "trtype": "TCP", 00:15:57.704 "adrfam": "IPv4", 00:15:57.704 "traddr": "10.0.0.2", 00:15:57.704 "trsvcid": "4420" 00:15:57.704 }, 00:15:57.704 "peer_address": { 00:15:57.704 "trtype": "TCP", 00:15:57.704 "adrfam": "IPv4", 00:15:57.704 "traddr": "10.0.0.1", 00:15:57.704 "trsvcid": "33124" 00:15:57.704 }, 00:15:57.704 "auth": { 00:15:57.704 "state": "completed", 00:15:57.704 "digest": "sha384", 00:15:57.704 "dhgroup": "ffdhe2048" 00:15:57.704 } 00:15:57.704 } 00:15:57.704 ]' 00:15:57.704 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.962 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.962 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.962 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.962 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.962 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.962 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.962 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.962 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:15:57.962 17:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:15:58.527 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.786 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.044 00:15:59.044 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.044 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.044 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.302 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.302 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.302 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.302 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.302 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.302 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.302 { 00:15:59.302 "cntlid": 59, 00:15:59.302 "qid": 0, 00:15:59.302 "state": "enabled", 00:15:59.302 "thread": "nvmf_tgt_poll_group_000", 00:15:59.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:59.302 "listen_address": { 00:15:59.302 "trtype": "TCP", 00:15:59.302 "adrfam": "IPv4", 00:15:59.302 "traddr": "10.0.0.2", 00:15:59.302 "trsvcid": "4420" 00:15:59.302 }, 00:15:59.302 "peer_address": { 00:15:59.302 "trtype": "TCP", 00:15:59.302 "adrfam": "IPv4", 00:15:59.302 "traddr": "10.0.0.1", 00:15:59.302 "trsvcid": "40678" 00:15:59.302 }, 00:15:59.302 "auth": { 00:15:59.302 "state": "completed", 00:15:59.302 "digest": "sha384", 00:15:59.302 "dhgroup": "ffdhe2048" 00:15:59.302 } 00:15:59.302 } 00:15:59.302 ]' 00:15:59.302 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.302 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.302 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.302 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:59.302 17:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.302 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.302 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.302 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.560 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:15:59.560 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:00.127 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.127 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:00.127 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.127 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.127 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.127 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.127 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:00.127 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:00.386 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:00.386 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.386 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.386 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:00.386 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:00.386 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.386 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.386 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.386 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.386 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.386 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.386 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.386 17:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.386 00:16:00.386 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.386 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.386 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.645 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.645 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.645 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.645 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.646 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.646 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.646 { 00:16:00.646 "cntlid": 61, 00:16:00.646 "qid": 0, 00:16:00.646 "state": "enabled", 00:16:00.646 "thread": "nvmf_tgt_poll_group_000", 00:16:00.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:00.646 "listen_address": { 00:16:00.646 "trtype": "TCP", 00:16:00.646 "adrfam": "IPv4", 00:16:00.646 "traddr": "10.0.0.2", 00:16:00.646 "trsvcid": "4420" 00:16:00.646 }, 00:16:00.646 "peer_address": { 00:16:00.646 "trtype": "TCP", 00:16:00.646 "adrfam": "IPv4", 00:16:00.646 "traddr": "10.0.0.1", 00:16:00.646 "trsvcid": "40694" 00:16:00.646 }, 00:16:00.646 "auth": { 00:16:00.646 "state": "completed", 00:16:00.646 "digest": "sha384", 00:16:00.646 "dhgroup": "ffdhe2048" 00:16:00.646 } 00:16:00.646 } 00:16:00.646 ]' 00:16:00.646 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.646 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.646 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.646 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:00.646 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.904 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.904 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.904 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.904 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:00.904 17:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:01.469 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.469 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:01.469 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.469 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.469 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.469 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.469 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:01.469 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:01.726 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:01.726 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.726 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:01.726 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:01.726 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:01.726 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.726 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:16:01.726 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.726 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.726 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.726 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:01.726 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.726 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.984 00:16:01.984 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.984 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.984 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.984 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.241 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.241 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.242 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.242 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.242 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.242 { 00:16:02.242 "cntlid": 63, 00:16:02.242 "qid": 0, 00:16:02.242 "state": "enabled", 00:16:02.242 "thread": "nvmf_tgt_poll_group_000", 00:16:02.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:02.242 "listen_address": { 00:16:02.242 "trtype": "TCP", 00:16:02.242 "adrfam": "IPv4", 00:16:02.242 "traddr": "10.0.0.2", 00:16:02.242 "trsvcid": "4420" 00:16:02.242 }, 00:16:02.242 "peer_address": { 00:16:02.242 "trtype": "TCP", 00:16:02.242 "adrfam": "IPv4", 00:16:02.242 "traddr": "10.0.0.1", 00:16:02.242 "trsvcid": "40704" 00:16:02.242 }, 00:16:02.242 "auth": { 00:16:02.242 "state": "completed", 00:16:02.242 "digest": "sha384", 00:16:02.242 "dhgroup": "ffdhe2048" 00:16:02.242 } 00:16:02.242 } 00:16:02.242 ]' 00:16:02.242 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.242 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.242 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.242 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:02.242 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.242 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.242 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.242 17:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.500 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:02.500 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:03.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.067 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.068 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.068 17:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.325 
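For readers following the trace: the controller nvme0 has just been attached with key0/ckey0 under sha384/ffdhe3072, and the entries that follow are the script's standard verification pass. Condensed into standalone commands (the rpc.py path, NQNs, and jq filters are taken verbatim from this log; the assumption that rpc_cmd resolves to rpc.py against the target's own RPC socket is ours, since the trace never prints that socket), the check amounts to:

    # initiator side: the authenticated bdev controller must exist
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0

    # target side: dump the subsystem's qpairs and check what was actually negotiated
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

The auth.state check is the one that proves DH-HMAC-CHAP actually ran to completion; the digest and dhgroup checks confirm the qpair negotiated exactly the combination that bdev_nvme_set_options restricted the host to.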
00:16:03.325 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.325 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.325 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.584 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.584 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.584 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.584 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.584 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.584 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.584 { 00:16:03.584 "cntlid": 65, 00:16:03.584 "qid": 0, 00:16:03.584 "state": "enabled", 00:16:03.584 "thread": "nvmf_tgt_poll_group_000", 00:16:03.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:03.584 "listen_address": { 00:16:03.584 "trtype": "TCP", 00:16:03.584 "adrfam": "IPv4", 00:16:03.584 "traddr": "10.0.0.2", 00:16:03.584 "trsvcid": "4420" 00:16:03.584 }, 00:16:03.584 "peer_address": { 00:16:03.584 "trtype": "TCP", 00:16:03.584 "adrfam": "IPv4", 00:16:03.584 "traddr": "10.0.0.1", 00:16:03.584 "trsvcid": "40716" 00:16:03.584 }, 00:16:03.584 "auth": { 00:16:03.584 "state": "completed", 00:16:03.584 "digest": "sha384", 00:16:03.584 "dhgroup": "ffdhe3072" 00:16:03.584 } 00:16:03.584 } 00:16:03.584 ]' 00:16:03.584 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.584 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.584 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.584 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:03.584 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.584 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.584 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.584 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.842 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:03.842 17:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:04.409 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.409 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:04.409 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.409 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.409 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.409 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.409 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:04.409 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:04.668 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:04.668 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.668 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.668 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:04.668 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:04.668 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.668 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.668 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.668 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.668 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.668 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.668 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.668 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.933 00:16:04.933 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.933 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.933 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.933 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.933 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.933 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.933 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.933 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.933 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.933 { 00:16:04.933 "cntlid": 67, 00:16:04.933 "qid": 0, 00:16:04.933 "state": "enabled", 00:16:04.933 "thread": "nvmf_tgt_poll_group_000", 00:16:04.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:04.933 "listen_address": { 00:16:04.933 "trtype": "TCP", 00:16:04.933 "adrfam": "IPv4", 00:16:04.933 "traddr": "10.0.0.2", 00:16:04.933 "trsvcid": "4420" 00:16:04.933 }, 00:16:04.933 "peer_address": { 00:16:04.933 "trtype": "TCP", 00:16:04.933 "adrfam": "IPv4", 00:16:04.933 "traddr": "10.0.0.1", 00:16:04.933 "trsvcid": "40740" 00:16:04.933 }, 00:16:04.933 "auth": { 00:16:04.933 "state": "completed", 00:16:04.933 "digest": "sha384", 00:16:04.933 "dhgroup": "ffdhe3072" 00:16:04.933 } 00:16:04.933 } 00:16:04.933 ]' 00:16:04.933 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.933 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.933 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.245 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:05.245 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.245 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.245 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.245 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.245 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret 
DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:05.246 17:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:05.866 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.866 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:05.866 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.866 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.866 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.866 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.866 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:05.866 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:06.125 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:06.125 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.125 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.125 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:06.125 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:06.125 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.125 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.125 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.125 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.125 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.125 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.125 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.125 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.385 00:16:06.385 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.385 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.385 17:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.385 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.385 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.385 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.385 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.385 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.385 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.385 { 00:16:06.385 "cntlid": 69, 00:16:06.385 "qid": 0, 00:16:06.385 "state": "enabled", 00:16:06.385 "thread": "nvmf_tgt_poll_group_000", 00:16:06.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:06.385 "listen_address": { 00:16:06.385 "trtype": "TCP", 00:16:06.385 "adrfam": "IPv4", 00:16:06.385 "traddr": "10.0.0.2", 00:16:06.385 "trsvcid": "4420" 00:16:06.385 }, 00:16:06.385 "peer_address": { 00:16:06.385 "trtype": "TCP", 00:16:06.385 "adrfam": "IPv4", 00:16:06.385 "traddr": "10.0.0.1", 00:16:06.385 "trsvcid": "40770" 00:16:06.385 }, 00:16:06.385 "auth": { 00:16:06.385 "state": "completed", 00:16:06.385 "digest": "sha384", 00:16:06.385 "dhgroup": "ffdhe3072" 00:16:06.385 } 00:16:06.385 } 00:16:06.385 ]' 00:16:06.385 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.385 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.385 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.385 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:06.385 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.645 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.645 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.645 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:06.645 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:06.645 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:07.212 17:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.212 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:07.212 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.212 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.212 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.212 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.212 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:07.213 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:07.472 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:07.472 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.472 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:07.472 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:07.472 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:07.472 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.472 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:16:07.472 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.472 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.472 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.472 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
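Note the asymmetry in the entries above: for keyid 3, both nvmf_subsystem_add_host and bdev_connect carry --dhchap-key key3 but no --dhchap-ctrlr-key, whereas the key0/key1/key2 rounds earlier in the trace carried both flags. That is the work of the expansion at target/auth.sh@68, visible verbatim in this log: when the ckeys entry for an index is empty, the :+ alternate value expands to nothing and the whole flag pair disappears, so the key3 rounds exercise one-way (host-only) authentication. A minimal, self-contained illustration of the same expansion, with hypothetical array contents:

    #!/usr/bin/env bash
    ckeys=("have" "have" "have" "")   # hypothetical: index 3 has no controller key
    for keyid in 0 3; do
        # same ${...:+...} pattern as auth.sh@68: emit the flag pair only if a ckey exists
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "keyid=$keyid -> extra args: ${ckey[*]:-<none, unidirectional auth>}"
    done
    # keyid=0 -> extra args: --dhchap-ctrlr-key ckey0
    # keyid=3 -> extra args: <none, unidirectional auth>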
00:16:07.472 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.472 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.731 00:16:07.731 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.731 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.731 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.990 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.990 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.990 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.990 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.990 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.990 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.990 { 00:16:07.990 "cntlid": 71, 00:16:07.990 "qid": 0, 00:16:07.990 "state": "enabled", 00:16:07.990 "thread": "nvmf_tgt_poll_group_000", 00:16:07.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:07.990 "listen_address": { 00:16:07.990 "trtype": "TCP", 00:16:07.990 "adrfam": "IPv4", 00:16:07.990 "traddr": "10.0.0.2", 00:16:07.990 "trsvcid": "4420" 00:16:07.990 }, 00:16:07.990 "peer_address": { 00:16:07.990 "trtype": "TCP", 00:16:07.990 "adrfam": "IPv4", 00:16:07.990 "traddr": "10.0.0.1", 00:16:07.990 "trsvcid": "40794" 00:16:07.990 }, 00:16:07.990 "auth": { 00:16:07.990 "state": "completed", 00:16:07.990 "digest": "sha384", 00:16:07.990 "dhgroup": "ffdhe3072" 00:16:07.990 } 00:16:07.990 } 00:16:07.990 ]' 00:16:07.990 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.990 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.990 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.990 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:07.990 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.990 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.990 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.990 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.250 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:08.250 17:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
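Stepping back, the trace from here on is the next pass of the same two-level sweep, now pinned to ffdhe4096. The loop headers at target/auth.sh@119 and @120 and the calls at @121/@123 that keep reappearing in this log imply a structure along these lines (a condensed sketch only; the dhgroups and keys arrays are defined earlier in auth.sh and are not visible in this excerpt):

    for dhgroup in "${dhgroups[@]}"; do    # in this excerpt: null, ffdhe2048, ffdhe3072, ffdhe4096
        for keyid in "${!keys[@]}"; do     # 0..3 in this run
            # pin the host to exactly one digest/dhgroup combination
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            # add_host -> attach -> verify qpairs -> detach -> nvme connect/disconnect -> remove_host
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done

One detail that helps when reading the nvme connect lines: the DHHC-1:NN: prefix on each --dhchap-secret is the NVMe DH-HMAC-CHAP secret representation, where NN encodes whether, and with which hash, the base64 payload is to be transformed before use (00 = use as-is; 01/02/03 = SHA-256/-384/-512), which is why all four prefixes appear among the secrets in this log.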
00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.819 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.078 00:16:09.078 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.078 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.078 17:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.337 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.337 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.337 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.337 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.337 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.337 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.337 { 00:16:09.337 "cntlid": 73, 00:16:09.337 "qid": 0, 00:16:09.337 "state": "enabled", 00:16:09.337 "thread": "nvmf_tgt_poll_group_000", 00:16:09.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:09.337 "listen_address": { 00:16:09.337 "trtype": "TCP", 00:16:09.337 "adrfam": "IPv4", 00:16:09.337 "traddr": "10.0.0.2", 00:16:09.337 "trsvcid": "4420" 00:16:09.337 }, 00:16:09.337 "peer_address": { 00:16:09.337 "trtype": "TCP", 00:16:09.337 "adrfam": "IPv4", 00:16:09.337 "traddr": "10.0.0.1", 00:16:09.337 "trsvcid": "52046" 00:16:09.337 }, 00:16:09.337 "auth": { 00:16:09.337 "state": "completed", 00:16:09.337 "digest": "sha384", 00:16:09.337 "dhgroup": "ffdhe4096" 00:16:09.337 } 00:16:09.337 } 00:16:09.337 ]' 00:16:09.337 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.337 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.337 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.337 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:09.338 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.338 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.338 
17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.338 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.596 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:09.596 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:10.161 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.161 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:10.161 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.161 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.161 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.161 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.161 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:10.161 17:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:10.419 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:10.419 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.419 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:10.419 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:10.419 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:10.419 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.419 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.419 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.419 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.419 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.419 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.420 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.420 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.678 00:16:10.678 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.678 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.678 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.678 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.678 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.678 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.678 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.678 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.678 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.678 { 00:16:10.678 "cntlid": 75, 00:16:10.678 "qid": 0, 00:16:10.678 "state": "enabled", 00:16:10.678 "thread": "nvmf_tgt_poll_group_000", 00:16:10.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:10.678 "listen_address": { 00:16:10.678 "trtype": "TCP", 00:16:10.678 "adrfam": "IPv4", 00:16:10.678 "traddr": "10.0.0.2", 00:16:10.678 "trsvcid": "4420" 00:16:10.678 }, 00:16:10.678 "peer_address": { 00:16:10.678 "trtype": "TCP", 00:16:10.678 "adrfam": "IPv4", 00:16:10.678 "traddr": "10.0.0.1", 00:16:10.678 "trsvcid": "52074" 00:16:10.678 }, 00:16:10.678 "auth": { 00:16:10.678 "state": "completed", 00:16:10.678 "digest": "sha384", 00:16:10.678 "dhgroup": "ffdhe4096" 00:16:10.678 } 00:16:10.678 } 00:16:10.678 ]' 00:16:10.678 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.938 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.938 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.938 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:16:10.938 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.938 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.938 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.938 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.938 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:10.938 17:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:11.506 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.506 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:11.507 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.507 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.507 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.507 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.507 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:11.507 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:11.766 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:11.766 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.766 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.766 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:11.766 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:11.766 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.766 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.766 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.766 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.766 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.766 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.766 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.766 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.025 00:16:12.025 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.025 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.025 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.284 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.284 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.284 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.284 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.284 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.284 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.284 { 00:16:12.284 "cntlid": 77, 00:16:12.284 "qid": 0, 00:16:12.284 "state": "enabled", 00:16:12.284 "thread": "nvmf_tgt_poll_group_000", 00:16:12.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:12.284 "listen_address": { 00:16:12.284 "trtype": "TCP", 00:16:12.284 "adrfam": "IPv4", 00:16:12.284 "traddr": "10.0.0.2", 00:16:12.284 "trsvcid": "4420" 00:16:12.284 }, 00:16:12.284 "peer_address": { 00:16:12.284 "trtype": "TCP", 00:16:12.284 "adrfam": "IPv4", 00:16:12.284 "traddr": "10.0.0.1", 00:16:12.284 "trsvcid": "52092" 00:16:12.284 }, 00:16:12.284 "auth": { 00:16:12.284 "state": "completed", 00:16:12.284 "digest": "sha384", 00:16:12.284 "dhgroup": "ffdhe4096" 00:16:12.284 } 00:16:12.284 } 00:16:12.284 ]' 00:16:12.284 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.284 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.284 17:52:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.284 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:12.284 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.284 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.284 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.284 17:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.543 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:12.543 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:13.112 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.112 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:13.112 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.112 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.112 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.112 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.112 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:13.112 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:13.112 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:13.112 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.112 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.112 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:13.112 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:13.112 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.112 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:16:13.112 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.112 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.372 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.372 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:13.372 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.372 17:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.372 00:16:13.372 17:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.372 17:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.372 17:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.631 17:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.631 17:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.631 17:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.631 17:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.631 17:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.631 17:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.631 { 00:16:13.631 "cntlid": 79, 00:16:13.631 "qid": 0, 00:16:13.631 "state": "enabled", 00:16:13.631 "thread": "nvmf_tgt_poll_group_000", 00:16:13.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:13.631 "listen_address": { 00:16:13.631 "trtype": "TCP", 00:16:13.631 "adrfam": "IPv4", 00:16:13.631 "traddr": "10.0.0.2", 00:16:13.631 "trsvcid": "4420" 00:16:13.631 }, 00:16:13.631 "peer_address": { 00:16:13.631 "trtype": "TCP", 00:16:13.631 "adrfam": "IPv4", 00:16:13.631 "traddr": "10.0.0.1", 00:16:13.631 "trsvcid": "52120" 00:16:13.631 }, 00:16:13.631 "auth": { 00:16:13.631 "state": "completed", 00:16:13.631 "digest": "sha384", 00:16:13.631 "dhgroup": "ffdhe4096" 00:16:13.631 } 00:16:13.631 } 00:16:13.631 ]' 00:16:13.631 17:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.631 17:53:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.631 17:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.631 17:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:13.631 17:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.631 17:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.631 17:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.631 17:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.890 17:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:13.890 17:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:14.457 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.457 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:14.457 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.457 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.457 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.457 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:14.457 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.457 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:14.457 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:14.716 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:14.716 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.716 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:14.716 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:14.716 17:53:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:14.716 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.716 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.716 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.716 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.716 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.716 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.716 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.716 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.975 00:16:14.975 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.975 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.975 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.234 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.234 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.234 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.234 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.234 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.234 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.234 { 00:16:15.234 "cntlid": 81, 00:16:15.234 "qid": 0, 00:16:15.234 "state": "enabled", 00:16:15.234 "thread": "nvmf_tgt_poll_group_000", 00:16:15.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:15.234 "listen_address": { 00:16:15.234 "trtype": "TCP", 00:16:15.234 "adrfam": "IPv4", 00:16:15.234 "traddr": "10.0.0.2", 00:16:15.234 "trsvcid": "4420" 00:16:15.234 }, 00:16:15.234 "peer_address": { 00:16:15.234 "trtype": "TCP", 00:16:15.234 "adrfam": "IPv4", 00:16:15.234 "traddr": "10.0.0.1", 00:16:15.234 "trsvcid": "52152" 00:16:15.234 }, 00:16:15.234 "auth": { 00:16:15.234 "state": "completed", 00:16:15.234 "digest": 
"sha384", 00:16:15.234 "dhgroup": "ffdhe6144" 00:16:15.234 } 00:16:15.234 } 00:16:15.234 ]' 00:16:15.234 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.234 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.234 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.234 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:15.234 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.234 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.234 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.234 17:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.493 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:15.493 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:16.061 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.061 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:16.061 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.061 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.061 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.061 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.061 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:16.061 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:16.320 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:16.320 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.320 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:16.320 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:16.320 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:16.320 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.320 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.320 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.320 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.320 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.320 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.320 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.320 17:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.579 00:16:16.579 17:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.579 17:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.579 17:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.579 17:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.579 17:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.579 17:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.579 17:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.579 17:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.579 17:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.579 { 00:16:16.579 "cntlid": 83, 00:16:16.579 "qid": 0, 00:16:16.579 "state": "enabled", 00:16:16.579 "thread": "nvmf_tgt_poll_group_000", 00:16:16.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:16.579 "listen_address": { 00:16:16.579 "trtype": "TCP", 00:16:16.579 "adrfam": "IPv4", 00:16:16.579 "traddr": "10.0.0.2", 00:16:16.579 
"trsvcid": "4420" 00:16:16.579 }, 00:16:16.579 "peer_address": { 00:16:16.579 "trtype": "TCP", 00:16:16.579 "adrfam": "IPv4", 00:16:16.579 "traddr": "10.0.0.1", 00:16:16.579 "trsvcid": "52174" 00:16:16.579 }, 00:16:16.579 "auth": { 00:16:16.579 "state": "completed", 00:16:16.579 "digest": "sha384", 00:16:16.579 "dhgroup": "ffdhe6144" 00:16:16.579 } 00:16:16.579 } 00:16:16.579 ]' 00:16:16.579 17:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.839 17:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.839 17:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.839 17:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:16.839 17:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.839 17:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.839 17:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.839 17:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.839 17:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:16.839 17:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:17.408 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.666 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:17.666 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.666 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.666 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.666 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.666 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:17.666 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:17.666 
17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:17.666 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.666 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:17.666 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:17.666 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:17.666 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.666 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.666 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.666 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.666 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.666 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.666 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.666 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.925 00:16:17.925 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.925 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.925 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.184 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.184 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.184 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.184 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.184 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.184 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.184 { 00:16:18.184 "cntlid": 85, 00:16:18.184 "qid": 0, 00:16:18.184 "state": "enabled", 00:16:18.184 "thread": "nvmf_tgt_poll_group_000", 00:16:18.184 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:18.184 "listen_address": { 00:16:18.184 "trtype": "TCP", 00:16:18.184 "adrfam": "IPv4", 00:16:18.184 "traddr": "10.0.0.2", 00:16:18.184 "trsvcid": "4420" 00:16:18.184 }, 00:16:18.184 "peer_address": { 00:16:18.184 "trtype": "TCP", 00:16:18.184 "adrfam": "IPv4", 00:16:18.184 "traddr": "10.0.0.1", 00:16:18.184 "trsvcid": "52206" 00:16:18.184 }, 00:16:18.184 "auth": { 00:16:18.184 "state": "completed", 00:16:18.184 "digest": "sha384", 00:16:18.184 "dhgroup": "ffdhe6144" 00:16:18.184 } 00:16:18.184 } 00:16:18.184 ]' 00:16:18.184 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.184 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.184 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.184 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:18.184 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.184 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.184 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.184 17:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.443 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:18.443 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:19.012 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.012 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:19.012 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.012 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.012 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.012 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.012 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:19.012 17:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:19.271 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:19.271 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.271 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.271 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:19.271 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:19.271 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.271 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:16:19.271 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.271 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.271 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.271 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:19.271 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.271 17:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:19.529 00:16:19.529 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.529 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.529 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.788 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.788 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.788 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.788 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.788 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.788 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.788 { 00:16:19.788 "cntlid": 87, 
00:16:19.788 "qid": 0, 00:16:19.788 "state": "enabled", 00:16:19.788 "thread": "nvmf_tgt_poll_group_000", 00:16:19.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:19.788 "listen_address": { 00:16:19.788 "trtype": "TCP", 00:16:19.788 "adrfam": "IPv4", 00:16:19.788 "traddr": "10.0.0.2", 00:16:19.788 "trsvcid": "4420" 00:16:19.788 }, 00:16:19.788 "peer_address": { 00:16:19.788 "trtype": "TCP", 00:16:19.788 "adrfam": "IPv4", 00:16:19.788 "traddr": "10.0.0.1", 00:16:19.788 "trsvcid": "48852" 00:16:19.788 }, 00:16:19.788 "auth": { 00:16:19.788 "state": "completed", 00:16:19.788 "digest": "sha384", 00:16:19.788 "dhgroup": "ffdhe6144" 00:16:19.788 } 00:16:19.788 } 00:16:19.788 ]' 00:16:19.788 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.788 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.788 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.788 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:19.788 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.788 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.788 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.788 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.047 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:20.047 17:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.613 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.178 00:16:21.178 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.178 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.178 17:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.438 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.438 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.438 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.438 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.438 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.438 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.438 { 00:16:21.438 "cntlid": 89, 00:16:21.438 "qid": 0, 00:16:21.438 "state": "enabled", 00:16:21.438 "thread": "nvmf_tgt_poll_group_000", 00:16:21.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:21.438 "listen_address": { 00:16:21.438 "trtype": "TCP", 00:16:21.438 "adrfam": "IPv4", 00:16:21.438 "traddr": "10.0.0.2", 00:16:21.438 "trsvcid": "4420" 00:16:21.438 }, 00:16:21.438 "peer_address": { 00:16:21.438 "trtype": "TCP", 00:16:21.438 "adrfam": "IPv4", 00:16:21.438 "traddr": "10.0.0.1", 00:16:21.438 "trsvcid": "48880" 00:16:21.438 }, 00:16:21.438 "auth": { 00:16:21.438 "state": "completed", 00:16:21.438 "digest": "sha384", 00:16:21.438 "dhgroup": "ffdhe8192" 00:16:21.438 } 00:16:21.438 } 00:16:21.438 ]' 00:16:21.438 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.438 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.438 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.438 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:21.438 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.438 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.438 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.438 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.438 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:21.438 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:22.373 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.373 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:22.373 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.373 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.373 17:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.373 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.373 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:22.373 17:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:22.373 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:22.373 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.373 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:22.373 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:22.373 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:22.373 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.373 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.373 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.373 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.373 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.373 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.373 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.373 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.941 00:16:22.941 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.941 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.941 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.941 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.941 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:22.941 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.941 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.941 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.941 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.941 { 00:16:22.941 "cntlid": 91, 00:16:22.941 "qid": 0, 00:16:22.941 "state": "enabled", 00:16:22.941 "thread": "nvmf_tgt_poll_group_000", 00:16:22.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:22.941 "listen_address": { 00:16:22.941 "trtype": "TCP", 00:16:22.941 "adrfam": "IPv4", 00:16:22.941 "traddr": "10.0.0.2", 00:16:22.941 "trsvcid": "4420" 00:16:22.941 }, 00:16:22.941 "peer_address": { 00:16:22.941 "trtype": "TCP", 00:16:22.941 "adrfam": "IPv4", 00:16:22.941 "traddr": "10.0.0.1", 00:16:22.941 "trsvcid": "48906" 00:16:22.941 }, 00:16:22.941 "auth": { 00:16:22.941 "state": "completed", 00:16:22.941 "digest": "sha384", 00:16:22.941 "dhgroup": "ffdhe8192" 00:16:22.941 } 00:16:22.941 } 00:16:22.941 ]' 00:16:22.941 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.941 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.941 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.941 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:22.941 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.941 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.941 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.941 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.200 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:23.200 17:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:23.767 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.767 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:23.767 17:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.767 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.767 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.767 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.767 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:23.767 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:24.027 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:24.027 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.027 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:24.027 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:24.027 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:24.027 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.027 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.027 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.027 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.027 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.027 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.027 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.027 17:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.595 00:16:24.595 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.595 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.595 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.595 17:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.595 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.595 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.595 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.595 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.595 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.595 { 00:16:24.595 "cntlid": 93, 00:16:24.595 "qid": 0, 00:16:24.595 "state": "enabled", 00:16:24.595 "thread": "nvmf_tgt_poll_group_000", 00:16:24.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:24.595 "listen_address": { 00:16:24.595 "trtype": "TCP", 00:16:24.595 "adrfam": "IPv4", 00:16:24.595 "traddr": "10.0.0.2", 00:16:24.595 "trsvcid": "4420" 00:16:24.595 }, 00:16:24.595 "peer_address": { 00:16:24.595 "trtype": "TCP", 00:16:24.595 "adrfam": "IPv4", 00:16:24.595 "traddr": "10.0.0.1", 00:16:24.595 "trsvcid": "48938" 00:16:24.595 }, 00:16:24.595 "auth": { 00:16:24.595 "state": "completed", 00:16:24.595 "digest": "sha384", 00:16:24.595 "dhgroup": "ffdhe8192" 00:16:24.595 } 00:16:24.595 } 00:16:24.595 ]' 00:16:24.595 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.595 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.595 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.595 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:24.595 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.853 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.853 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.853 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.853 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:24.854 17:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:25.422 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.422 17:53:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:25.422 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.422 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.422 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.422 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.422 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:25.422 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:25.680 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:25.680 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.680 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.680 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:25.680 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:25.680 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.680 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:16:25.680 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.680 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.680 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.680 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:25.680 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.680 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.247 00:16:26.247 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.247 17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.247 
17:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.247 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.247 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.247 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.247 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.247 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.247 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.247 { 00:16:26.247 "cntlid": 95, 00:16:26.247 "qid": 0, 00:16:26.247 "state": "enabled", 00:16:26.247 "thread": "nvmf_tgt_poll_group_000", 00:16:26.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:26.247 "listen_address": { 00:16:26.247 "trtype": "TCP", 00:16:26.247 "adrfam": "IPv4", 00:16:26.247 "traddr": "10.0.0.2", 00:16:26.247 "trsvcid": "4420" 00:16:26.247 }, 00:16:26.247 "peer_address": { 00:16:26.247 "trtype": "TCP", 00:16:26.247 "adrfam": "IPv4", 00:16:26.247 "traddr": "10.0.0.1", 00:16:26.247 "trsvcid": "48962" 00:16:26.247 }, 00:16:26.247 "auth": { 00:16:26.247 "state": "completed", 00:16:26.247 "digest": "sha384", 00:16:26.247 "dhgroup": "ffdhe8192" 00:16:26.247 } 00:16:26.247 } 00:16:26.247 ]' 00:16:26.247 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.247 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.247 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.247 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:26.247 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.506 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.506 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.506 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.506 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:26.506 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:27.073 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.332 17:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:27.332 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.332 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.332 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.332 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:27.332 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.333 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.333 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:27.333 17:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:27.333 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:27.333 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.333 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:27.333 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:27.333 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:27.333 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.333 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.333 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.333 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.333 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.333 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.333 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.333 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.590 00:16:27.590 
17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.590 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.590 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.849 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.849 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.849 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.849 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.849 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.849 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.849 { 00:16:27.849 "cntlid": 97, 00:16:27.849 "qid": 0, 00:16:27.849 "state": "enabled", 00:16:27.849 "thread": "nvmf_tgt_poll_group_000", 00:16:27.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:27.849 "listen_address": { 00:16:27.849 "trtype": "TCP", 00:16:27.849 "adrfam": "IPv4", 00:16:27.849 "traddr": "10.0.0.2", 00:16:27.849 "trsvcid": "4420" 00:16:27.849 }, 00:16:27.849 "peer_address": { 00:16:27.849 "trtype": "TCP", 00:16:27.849 "adrfam": "IPv4", 00:16:27.849 "traddr": "10.0.0.1", 00:16:27.849 "trsvcid": "48994" 00:16:27.849 }, 00:16:27.849 "auth": { 00:16:27.849 "state": "completed", 00:16:27.849 "digest": "sha512", 00:16:27.849 "dhgroup": "null" 00:16:27.849 } 00:16:27.849 } 00:16:27.849 ]' 00:16:27.849 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.849 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.849 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.849 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:27.849 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.849 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.849 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.849 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.108 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:28.108 17:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:28.675 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.675 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:28.675 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.675 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.675 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.675 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.675 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:28.675 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:28.934 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:28.934 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.934 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:28.934 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:28.934 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:28.934 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.934 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.934 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.934 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.934 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.934 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.934 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.934 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.193 00:16:29.193 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.193 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.193 17:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.193 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.193 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.193 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.193 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.453 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.453 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.453 { 00:16:29.453 "cntlid": 99, 00:16:29.453 "qid": 0, 00:16:29.453 "state": "enabled", 00:16:29.453 "thread": "nvmf_tgt_poll_group_000", 00:16:29.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:29.453 "listen_address": { 00:16:29.453 "trtype": "TCP", 00:16:29.453 "adrfam": "IPv4", 00:16:29.453 "traddr": "10.0.0.2", 00:16:29.453 "trsvcid": "4420" 00:16:29.453 }, 00:16:29.453 "peer_address": { 00:16:29.453 "trtype": "TCP", 00:16:29.453 "adrfam": "IPv4", 00:16:29.453 "traddr": "10.0.0.1", 00:16:29.453 "trsvcid": "51962" 00:16:29.453 }, 00:16:29.453 "auth": { 00:16:29.453 "state": "completed", 00:16:29.453 "digest": "sha512", 00:16:29.453 "dhgroup": "null" 00:16:29.453 } 00:16:29.453 } 00:16:29.453 ]' 00:16:29.453 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.453 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.453 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.453 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:29.453 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.453 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.453 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.453 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.453 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:29.453 17:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:30.390 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.390 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:30.390 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.390 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.390 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.390 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.390 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:30.390 17:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:30.390 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:30.390 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.390 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:30.390 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:30.390 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:30.390 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.390 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.390 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.390 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.390 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.390 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.390 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
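
Every pass above follows the same per-key round trip, repeated for each digest x dhgroup x keyid combination the script iterates over (target/auth.sh@118-120). A minimal sketch of one iteration, using the rpc.py path, sockets, NQNs, and host UUID from this log; key0/ckey0 are key names registered earlier in the script, and the DHHC-1 secrets themselves are elided:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # host side: restrict the initiator to the digest/dhgroup combination under test
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # target side: allow the host NQN on the subsystem with the keys under test
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # authenticate a bdev controller, verify it came up, then tear everything down
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
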
00:16:30.390 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.650 00:16:30.650 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.650 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.650 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.650 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.650 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.650 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.650 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.650 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.650 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.650 { 00:16:30.650 "cntlid": 101, 00:16:30.650 "qid": 0, 00:16:30.650 "state": "enabled", 00:16:30.650 "thread": "nvmf_tgt_poll_group_000", 00:16:30.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:30.650 "listen_address": { 00:16:30.650 "trtype": "TCP", 00:16:30.650 "adrfam": "IPv4", 00:16:30.650 "traddr": "10.0.0.2", 00:16:30.650 "trsvcid": "4420" 00:16:30.650 }, 00:16:30.650 "peer_address": { 00:16:30.650 "trtype": "TCP", 00:16:30.650 "adrfam": "IPv4", 00:16:30.650 "traddr": "10.0.0.1", 00:16:30.650 "trsvcid": "51990" 00:16:30.650 }, 00:16:30.650 "auth": { 00:16:30.650 "state": "completed", 00:16:30.650 "digest": "sha512", 00:16:30.650 "dhgroup": "null" 00:16:30.650 } 00:16:30.650 } 00:16:30.650 ]' 00:16:30.650 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.910 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.910 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.910 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:30.910 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.910 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.910 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.910 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.910 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:30.910 17:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.849 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.108 00:16:32.108 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.108 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.108 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.108 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.108 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.108 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.108 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.108 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.108 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.108 { 00:16:32.108 "cntlid": 103, 00:16:32.108 "qid": 0, 00:16:32.108 "state": "enabled", 00:16:32.108 "thread": "nvmf_tgt_poll_group_000", 00:16:32.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:32.108 "listen_address": { 00:16:32.108 "trtype": "TCP", 00:16:32.108 "adrfam": "IPv4", 00:16:32.108 "traddr": "10.0.0.2", 00:16:32.108 "trsvcid": "4420" 00:16:32.108 }, 00:16:32.108 "peer_address": { 00:16:32.108 "trtype": "TCP", 00:16:32.108 "adrfam": "IPv4", 00:16:32.108 "traddr": "10.0.0.1", 00:16:32.108 "trsvcid": "52010" 00:16:32.108 }, 00:16:32.108 "auth": { 00:16:32.108 "state": "completed", 00:16:32.108 "digest": "sha512", 00:16:32.108 "dhgroup": "null" 00:16:32.108 } 00:16:32.108 } 00:16:32.108 ]' 00:16:32.108 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.108 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.108 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.367 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:32.367 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.367 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.367 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.367 17:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.368 17:53:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:32.368 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:32.938 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.938 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:32.938 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.938 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.938 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.938 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.938 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.938 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:32.938 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:33.198 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:33.198 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.198 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:33.198 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:33.198 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:33.198 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.198 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.198 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.198 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.198 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.198 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
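
Each pass is judged by the target's view of the queue pair: the script captures nvmf_subsystem_get_qpairs and asserts the negotiated digest, DH group, and auth state with jq, as the [[ ... ]] checks in the trace show. A sketch of those checks for the sha512/ffdhe2048 pass, inlined without the script's rpc_cmd/hostrpc wrappers ($rpc as in the sketch above):

    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

Only "completed" passes; a qpair that connected without finishing DH-HMAC-CHAP negotiation would fail the third assertion even though the connection itself succeeded.
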
00:16:33.198 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.198 17:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.457 00:16:33.457 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.457 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.457 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.716 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.716 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.716 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.716 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.716 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.716 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.716 { 00:16:33.716 "cntlid": 105, 00:16:33.716 "qid": 0, 00:16:33.716 "state": "enabled", 00:16:33.716 "thread": "nvmf_tgt_poll_group_000", 00:16:33.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:33.716 "listen_address": { 00:16:33.716 "trtype": "TCP", 00:16:33.716 "adrfam": "IPv4", 00:16:33.716 "traddr": "10.0.0.2", 00:16:33.716 "trsvcid": "4420" 00:16:33.716 }, 00:16:33.716 "peer_address": { 00:16:33.716 "trtype": "TCP", 00:16:33.716 "adrfam": "IPv4", 00:16:33.716 "traddr": "10.0.0.1", 00:16:33.716 "trsvcid": "52034" 00:16:33.716 }, 00:16:33.716 "auth": { 00:16:33.716 "state": "completed", 00:16:33.716 "digest": "sha512", 00:16:33.716 "dhgroup": "ffdhe2048" 00:16:33.716 } 00:16:33.716 } 00:16:33.716 ]' 00:16:33.716 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.716 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.716 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.716 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.716 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.716 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.716 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.716 17:53:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.976 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:33.976 17:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.545 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.804 00:16:34.805 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.805 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.805 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.064 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.064 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.064 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.064 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.064 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.064 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.064 { 00:16:35.064 "cntlid": 107, 00:16:35.064 "qid": 0, 00:16:35.064 "state": "enabled", 00:16:35.064 "thread": "nvmf_tgt_poll_group_000", 00:16:35.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:35.064 "listen_address": { 00:16:35.064 "trtype": "TCP", 00:16:35.064 "adrfam": "IPv4", 00:16:35.064 "traddr": "10.0.0.2", 00:16:35.064 "trsvcid": "4420" 00:16:35.064 }, 00:16:35.064 "peer_address": { 00:16:35.064 "trtype": "TCP", 00:16:35.064 "adrfam": "IPv4", 00:16:35.064 "traddr": "10.0.0.1", 00:16:35.064 "trsvcid": "52060" 00:16:35.064 }, 00:16:35.064 "auth": { 00:16:35.064 "state": "completed", 00:16:35.064 "digest": "sha512", 00:16:35.064 "dhgroup": "ffdhe2048" 00:16:35.064 } 00:16:35.064 } 00:16:35.064 ]' 00:16:35.064 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.064 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.064 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.064 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:35.065 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:16:35.065 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.065 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.065 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.324 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:35.324 17:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:35.893 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.893 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:35.893 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.893 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.893 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.893 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.893 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:35.893 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:36.152 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:36.152 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.152 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.152 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:36.152 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.152 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.152 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 
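The hostrpc helper that recurs throughout this trace (target/auth.sh@31) is a thin wrapper around SPDK's rpc.py aimed at the host-side application's socket, while rpc_cmd (from common/autotest_common.sh) drives the target over its own socket. A minimal sketch of the host-side wrapper, assuming rootdir points at the SPDK checkout visible in the paths; the variable name is an assumption, the socket path is taken from the trace:

    # host-side RPC channel; the bdev_nvme_* calls in this trace all go through here
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }

    # usage, matching the attach pattern seen above:
    #   hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    #       -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    #       --dhchap-key key2 --dhchap-ctrlr-key ckey2

Keeping the two RPC sockets apart is what lets a single test box act as both NVMe-oF target and host. Note that key2 and ckey2 in the attach are names of key objects presumably registered with the host application earlier in the run, not the secrets themselves.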
00:16:36.152 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.152 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.152 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.152 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.152 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.152 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.152 00:16:36.152 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.152 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.152 17:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.412 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.412 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.412 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.412 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.412 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.412 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.412 { 00:16:36.412 "cntlid": 109, 00:16:36.412 "qid": 0, 00:16:36.412 "state": "enabled", 00:16:36.412 "thread": "nvmf_tgt_poll_group_000", 00:16:36.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:36.412 "listen_address": { 00:16:36.412 "trtype": "TCP", 00:16:36.412 "adrfam": "IPv4", 00:16:36.412 "traddr": "10.0.0.2", 00:16:36.412 "trsvcid": "4420" 00:16:36.412 }, 00:16:36.412 "peer_address": { 00:16:36.412 "trtype": "TCP", 00:16:36.412 "adrfam": "IPv4", 00:16:36.412 "traddr": "10.0.0.1", 00:16:36.412 "trsvcid": "52094" 00:16:36.412 }, 00:16:36.412 "auth": { 00:16:36.412 "state": "completed", 00:16:36.412 "digest": "sha512", 00:16:36.412 "dhgroup": "ffdhe2048" 00:16:36.412 } 00:16:36.412 } 00:16:36.412 ]' 00:16:36.413 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.413 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.413 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.413 17:53:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.413 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.413 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.413 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.413 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.673 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:36.673 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:37.240 17:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.240 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:37.240 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.240 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.240 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.240 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.240 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:37.240 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:37.499 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:37.499 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.499 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.499 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:37.499 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.499 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.499 17:53:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:16:37.499 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.499 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.499 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.499 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.499 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.499 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.757 00:16:37.757 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.757 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.758 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.758 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.758 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.758 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.758 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.758 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.758 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.758 { 00:16:37.758 "cntlid": 111, 00:16:37.758 "qid": 0, 00:16:37.758 "state": "enabled", 00:16:37.758 "thread": "nvmf_tgt_poll_group_000", 00:16:37.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:37.758 "listen_address": { 00:16:37.758 "trtype": "TCP", 00:16:37.758 "adrfam": "IPv4", 00:16:37.758 "traddr": "10.0.0.2", 00:16:37.758 "trsvcid": "4420" 00:16:37.758 }, 00:16:37.758 "peer_address": { 00:16:37.758 "trtype": "TCP", 00:16:37.758 "adrfam": "IPv4", 00:16:37.758 "traddr": "10.0.0.1", 00:16:37.758 "trsvcid": "52112" 00:16:37.758 }, 00:16:37.758 "auth": { 00:16:37.758 "state": "completed", 00:16:37.758 "digest": "sha512", 00:16:37.758 "dhgroup": "ffdhe2048" 00:16:37.758 } 00:16:37.758 } 00:16:37.758 ]' 00:16:37.758 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.016 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.016 
17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.016 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:38.016 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.016 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.016 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.016 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.016 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:38.016 17:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.953 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.212 00:16:39.212 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.212 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.212 17:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.212 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.212 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.212 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.212 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.212 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.212 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.212 { 00:16:39.212 "cntlid": 113, 00:16:39.212 "qid": 0, 00:16:39.212 "state": "enabled", 00:16:39.212 "thread": "nvmf_tgt_poll_group_000", 00:16:39.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:39.212 "listen_address": { 00:16:39.212 "trtype": "TCP", 00:16:39.212 "adrfam": "IPv4", 00:16:39.212 "traddr": "10.0.0.2", 00:16:39.212 "trsvcid": "4420" 00:16:39.212 }, 00:16:39.212 "peer_address": { 00:16:39.212 "trtype": "TCP", 00:16:39.212 "adrfam": "IPv4", 00:16:39.212 "traddr": "10.0.0.1", 00:16:39.212 "trsvcid": "48518" 00:16:39.212 }, 00:16:39.212 "auth": { 00:16:39.212 "state": "completed", 00:16:39.212 "digest": "sha512", 00:16:39.212 "dhgroup": "ffdhe3072" 00:16:39.212 } 00:16:39.212 } 00:16:39.212 ]' 00:16:39.212 17:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.471 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.471 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.471 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.471 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.471 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.471 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.471 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.471 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:39.471 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:40.404 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.404 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:40.404 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.404 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.404 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.404 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.404 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:40.404 17:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:40.404 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:40.404 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.404 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:40.404 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:40.404 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:40.404 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.404 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.404 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.404 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.404 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.404 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.404 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.404 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.663 00:16:40.663 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.663 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.663 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.663 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.663 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.663 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.663 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.663 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.663 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.663 { 00:16:40.663 "cntlid": 115, 00:16:40.663 "qid": 0, 00:16:40.663 "state": "enabled", 00:16:40.663 "thread": "nvmf_tgt_poll_group_000", 00:16:40.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:40.663 "listen_address": { 00:16:40.663 "trtype": "TCP", 00:16:40.663 "adrfam": "IPv4", 00:16:40.663 "traddr": "10.0.0.2", 00:16:40.663 "trsvcid": "4420" 00:16:40.663 }, 00:16:40.663 "peer_address": { 00:16:40.663 "trtype": "TCP", 00:16:40.663 "adrfam": "IPv4", 
00:16:40.663 "traddr": "10.0.0.1", 00:16:40.663 "trsvcid": "48532" 00:16:40.663 }, 00:16:40.663 "auth": { 00:16:40.663 "state": "completed", 00:16:40.663 "digest": "sha512", 00:16:40.663 "dhgroup": "ffdhe3072" 00:16:40.663 } 00:16:40.663 } 00:16:40.663 ]' 00:16:40.663 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.663 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.663 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.922 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:40.922 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.922 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.922 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.922 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.922 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:40.922 17:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:41.489 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.489 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:41.489 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.490 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.747 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.747 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.747 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:41.747 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:41.747 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:16:41.747 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.747 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.747 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:41.747 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:41.747 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.747 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.747 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.747 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.747 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.747 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.747 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.748 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.005 00:16:42.005 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.005 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.005 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.264 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.264 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.264 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.264 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.264 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.264 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.264 { 00:16:42.264 "cntlid": 117, 00:16:42.264 "qid": 0, 00:16:42.264 "state": "enabled", 00:16:42.264 "thread": "nvmf_tgt_poll_group_000", 00:16:42.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:42.264 "listen_address": { 00:16:42.264 "trtype": "TCP", 
00:16:42.264 "adrfam": "IPv4", 00:16:42.264 "traddr": "10.0.0.2", 00:16:42.264 "trsvcid": "4420" 00:16:42.264 }, 00:16:42.264 "peer_address": { 00:16:42.264 "trtype": "TCP", 00:16:42.264 "adrfam": "IPv4", 00:16:42.264 "traddr": "10.0.0.1", 00:16:42.264 "trsvcid": "48558" 00:16:42.264 }, 00:16:42.264 "auth": { 00:16:42.264 "state": "completed", 00:16:42.264 "digest": "sha512", 00:16:42.264 "dhgroup": "ffdhe3072" 00:16:42.264 } 00:16:42.264 } 00:16:42.264 ]' 00:16:42.264 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.264 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.264 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.264 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.264 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.264 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.264 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.264 17:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.523 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:42.523 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.162 17:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:43.443 00:16:43.443 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.443 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.443 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.702 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.702 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.702 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.702 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.702 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.702 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.702 { 00:16:43.702 "cntlid": 119, 00:16:43.702 "qid": 0, 00:16:43.702 "state": "enabled", 00:16:43.702 "thread": "nvmf_tgt_poll_group_000", 00:16:43.702 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:43.702 "listen_address": { 00:16:43.702 "trtype": "TCP", 00:16:43.702 "adrfam": "IPv4", 00:16:43.702 "traddr": "10.0.0.2", 00:16:43.702 "trsvcid": "4420" 00:16:43.702 }, 00:16:43.702 "peer_address": { 00:16:43.702 "trtype": "TCP", 00:16:43.702 "adrfam": "IPv4", 00:16:43.702 "traddr": "10.0.0.1", 00:16:43.702 "trsvcid": "48578" 00:16:43.702 }, 00:16:43.702 "auth": { 00:16:43.702 "state": "completed", 00:16:43.702 "digest": "sha512", 00:16:43.702 "dhgroup": "ffdhe3072" 00:16:43.702 } 00:16:43.702 } 00:16:43.702 ]' 00:16:43.702 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.702 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.702 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.702 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:43.702 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.703 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.703 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.703 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.962 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:43.962 17:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:44.532 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.532 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:44.532 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.532 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.532 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.532 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.532 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.532 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:44.532 17:53:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:44.791 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:44.791 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.791 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.791 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:44.791 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:44.791 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.791 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.791 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.791 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.791 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.791 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.791 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.791 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.050 00:16:45.050 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.050 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.050 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.050 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.050 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.050 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.050 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.050 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.050 17:53:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.050 { 00:16:45.050 "cntlid": 121, 00:16:45.050 "qid": 0, 00:16:45.050 "state": "enabled", 00:16:45.050 "thread": "nvmf_tgt_poll_group_000", 00:16:45.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:45.050 "listen_address": { 00:16:45.050 "trtype": "TCP", 00:16:45.050 "adrfam": "IPv4", 00:16:45.050 "traddr": "10.0.0.2", 00:16:45.050 "trsvcid": "4420" 00:16:45.050 }, 00:16:45.050 "peer_address": { 00:16:45.050 "trtype": "TCP", 00:16:45.050 "adrfam": "IPv4", 00:16:45.050 "traddr": "10.0.0.1", 00:16:45.050 "trsvcid": "48590" 00:16:45.050 }, 00:16:45.050 "auth": { 00:16:45.050 "state": "completed", 00:16:45.050 "digest": "sha512", 00:16:45.050 "dhgroup": "ffdhe4096" 00:16:45.050 } 00:16:45.050 } 00:16:45.050 ]' 00:16:45.050 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.050 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.051 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.310 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.310 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.310 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.310 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.310 17:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.310 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:45.310 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:45.875 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.875 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:45.875 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.875 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.133 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
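Each keyid pass ends the way the block above does: after the SPDK host detaches, the same secrets are pushed through the kernel initiator with nvme-cli (target/auth.sh@36), which takes them inline rather than as keyring names. The DHHC-1:NN:...: strings are the standard NVMe-oF DH-HMAC-CHAP secret representation; judging by the key0/key3 entries above, the two-digit field records the hash used to transform the secret (00 for an untransformed secret, 03 for SHA-512), and nvme-cli can also generate keys in this form with its gen-dhchap-key subcommand. A sketch of the host leg, with hostnqn, hostid, key and ckey standing in for the literal values in the trace:

    # kernel-initiator pass: secrets are passed inline, not by keyring name
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The "disconnected 1 controller(s)" line after each connect is nvme-cli's confirmation that the authenticated controller existed; the target-side nvmf_subsystem_remove_host that follows clears the way for the next key.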
00:16:46.133 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.133 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:46.133 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:46.134 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:46.134 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.134 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.134 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:46.134 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:46.134 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.134 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.134 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.134 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.134 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.134 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.134 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.134 17:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.392 00:16:46.392 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.392 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.392 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.654 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.654 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.654 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.654 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.654 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.654 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.654 { 00:16:46.654 "cntlid": 123, 00:16:46.654 "qid": 0, 00:16:46.655 "state": "enabled", 00:16:46.655 "thread": "nvmf_tgt_poll_group_000", 00:16:46.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:46.655 "listen_address": { 00:16:46.655 "trtype": "TCP", 00:16:46.655 "adrfam": "IPv4", 00:16:46.655 "traddr": "10.0.0.2", 00:16:46.655 "trsvcid": "4420" 00:16:46.655 }, 00:16:46.655 "peer_address": { 00:16:46.655 "trtype": "TCP", 00:16:46.655 "adrfam": "IPv4", 00:16:46.655 "traddr": "10.0.0.1", 00:16:46.655 "trsvcid": "48634" 00:16:46.655 }, 00:16:46.655 "auth": { 00:16:46.655 "state": "completed", 00:16:46.655 "digest": "sha512", 00:16:46.655 "dhgroup": "ffdhe4096" 00:16:46.655 } 00:16:46.655 } 00:16:46.655 ]' 00:16:46.655 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.655 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.655 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.655 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.655 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.655 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.655 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.655 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.915 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:46.915 17:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:47.484 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.484 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:47.484 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.484 17:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.484 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.484 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.484 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:47.484 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:47.743 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:47.743 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.743 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:47.743 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:47.743 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:47.743 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.743 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.743 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.743 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.743 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.743 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.743 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.743 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.002 00:16:48.002 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.002 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.002 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.002 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.002 17:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.002 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.002 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.002 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.002 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.002 { 00:16:48.002 "cntlid": 125, 00:16:48.002 "qid": 0, 00:16:48.002 "state": "enabled", 00:16:48.002 "thread": "nvmf_tgt_poll_group_000", 00:16:48.002 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:48.002 "listen_address": { 00:16:48.002 "trtype": "TCP", 00:16:48.002 "adrfam": "IPv4", 00:16:48.002 "traddr": "10.0.0.2", 00:16:48.002 "trsvcid": "4420" 00:16:48.002 }, 00:16:48.002 "peer_address": { 00:16:48.002 "trtype": "TCP", 00:16:48.002 "adrfam": "IPv4", 00:16:48.002 "traddr": "10.0.0.1", 00:16:48.002 "trsvcid": "48650" 00:16:48.002 }, 00:16:48.002 "auth": { 00:16:48.002 "state": "completed", 00:16:48.002 "digest": "sha512", 00:16:48.002 "dhgroup": "ffdhe4096" 00:16:48.002 } 00:16:48.002 } 00:16:48.002 ]' 00:16:48.002 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.002 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.002 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.261 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:48.261 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.261 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.261 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.261 17:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.261 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:48.261 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:48.827 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.827 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:48.827 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.827 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.827 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.827 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.827 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:48.827 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:49.086 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:49.086 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.086 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.086 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:49.086 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:49.087 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.087 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:16:49.087 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.087 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.087 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.087 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:49.087 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.087 17:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.347 00:16:49.347 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.347 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.347 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.608 17:53:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.608 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.608 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.608 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.608 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.608 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.608 { 00:16:49.608 "cntlid": 127, 00:16:49.608 "qid": 0, 00:16:49.608 "state": "enabled", 00:16:49.608 "thread": "nvmf_tgt_poll_group_000", 00:16:49.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:49.608 "listen_address": { 00:16:49.608 "trtype": "TCP", 00:16:49.608 "adrfam": "IPv4", 00:16:49.608 "traddr": "10.0.0.2", 00:16:49.608 "trsvcid": "4420" 00:16:49.608 }, 00:16:49.608 "peer_address": { 00:16:49.608 "trtype": "TCP", 00:16:49.608 "adrfam": "IPv4", 00:16:49.608 "traddr": "10.0.0.1", 00:16:49.608 "trsvcid": "46542" 00:16:49.608 }, 00:16:49.608 "auth": { 00:16:49.608 "state": "completed", 00:16:49.608 "digest": "sha512", 00:16:49.608 "dhgroup": "ffdhe4096" 00:16:49.608 } 00:16:49.608 } 00:16:49.608 ]' 00:16:49.608 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.608 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.608 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.608 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.608 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.608 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.608 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.608 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.867 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:49.867 17:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:50.436 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.437 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:50.437 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.437 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.437 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.437 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.437 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.437 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:50.437 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:50.697 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:50.697 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.697 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.697 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:50.697 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:50.697 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.697 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.697 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.697 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.697 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.697 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.697 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.697 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.957 00:16:50.957 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.957 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.957 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.217 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.218 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.218 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.218 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.218 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.218 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.218 { 00:16:51.218 "cntlid": 129, 00:16:51.218 "qid": 0, 00:16:51.218 "state": "enabled", 00:16:51.218 "thread": "nvmf_tgt_poll_group_000", 00:16:51.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:51.218 "listen_address": { 00:16:51.218 "trtype": "TCP", 00:16:51.218 "adrfam": "IPv4", 00:16:51.218 "traddr": "10.0.0.2", 00:16:51.218 "trsvcid": "4420" 00:16:51.218 }, 00:16:51.218 "peer_address": { 00:16:51.218 "trtype": "TCP", 00:16:51.218 "adrfam": "IPv4", 00:16:51.218 "traddr": "10.0.0.1", 00:16:51.218 "trsvcid": "46570" 00:16:51.218 }, 00:16:51.218 "auth": { 00:16:51.218 "state": "completed", 00:16:51.218 "digest": "sha512", 00:16:51.218 "dhgroup": "ffdhe6144" 00:16:51.218 } 00:16:51.218 } 00:16:51.218 ]' 00:16:51.218 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.218 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.218 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.218 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.218 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.218 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.218 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.218 17:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.478 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:51.478 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret 
DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:52.050 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.050 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:52.050 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.050 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.050 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.050 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.050 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:52.050 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:52.309 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:52.309 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.309 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.309 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:52.309 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:52.309 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.309 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.309 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.309 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.309 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.309 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.309 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.309 17:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.569 00:16:52.569 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.569 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.569 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.569 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.569 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.569 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.569 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.569 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.569 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.569 { 00:16:52.569 "cntlid": 131, 00:16:52.569 "qid": 0, 00:16:52.569 "state": "enabled", 00:16:52.569 "thread": "nvmf_tgt_poll_group_000", 00:16:52.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:52.569 "listen_address": { 00:16:52.569 "trtype": "TCP", 00:16:52.569 "adrfam": "IPv4", 00:16:52.569 "traddr": "10.0.0.2", 00:16:52.569 "trsvcid": "4420" 00:16:52.569 }, 00:16:52.569 "peer_address": { 00:16:52.569 "trtype": "TCP", 00:16:52.569 "adrfam": "IPv4", 00:16:52.569 "traddr": "10.0.0.1", 00:16:52.569 "trsvcid": "46598" 00:16:52.569 }, 00:16:52.569 "auth": { 00:16:52.569 "state": "completed", 00:16:52.569 "digest": "sha512", 00:16:52.569 "dhgroup": "ffdhe6144" 00:16:52.569 } 00:16:52.569 } 00:16:52.569 ]' 00:16:52.569 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.828 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.828 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.828 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.828 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.828 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.828 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.828 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.828 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:52.828 17:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:53.396 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.656 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.914 00:16:53.915 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.915 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.915 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.172 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.173 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.173 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.173 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.173 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.173 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.173 { 00:16:54.173 "cntlid": 133, 00:16:54.173 "qid": 0, 00:16:54.173 "state": "enabled", 00:16:54.173 "thread": "nvmf_tgt_poll_group_000", 00:16:54.173 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:54.173 "listen_address": { 00:16:54.173 "trtype": "TCP", 00:16:54.173 "adrfam": "IPv4", 00:16:54.173 "traddr": "10.0.0.2", 00:16:54.173 "trsvcid": "4420" 00:16:54.173 }, 00:16:54.173 "peer_address": { 00:16:54.173 "trtype": "TCP", 00:16:54.173 "adrfam": "IPv4", 00:16:54.173 "traddr": "10.0.0.1", 00:16:54.173 "trsvcid": "46624" 00:16:54.173 }, 00:16:54.173 "auth": { 00:16:54.173 "state": "completed", 00:16:54.173 "digest": "sha512", 00:16:54.173 "dhgroup": "ffdhe6144" 00:16:54.173 } 00:16:54.173 } 00:16:54.173 ]' 00:16:54.173 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.173 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.173 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.173 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.173 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.173 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.173 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.173 17:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.431 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret 
DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:54.431 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:16:54.998 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.998 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:54.998 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.998 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.998 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.998 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.998 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:54.998 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:55.258 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:55.258 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.258 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.258 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:55.258 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:55.258 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.258 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:16:55.258 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.258 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.258 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.258 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:55.258 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:16:55.258 17:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:55.517 00:16:55.517 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.517 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.517 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.517 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.517 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.517 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.517 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.517 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.517 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.517 { 00:16:55.517 "cntlid": 135, 00:16:55.517 "qid": 0, 00:16:55.517 "state": "enabled", 00:16:55.517 "thread": "nvmf_tgt_poll_group_000", 00:16:55.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:55.517 "listen_address": { 00:16:55.517 "trtype": "TCP", 00:16:55.517 "adrfam": "IPv4", 00:16:55.517 "traddr": "10.0.0.2", 00:16:55.517 "trsvcid": "4420" 00:16:55.517 }, 00:16:55.517 "peer_address": { 00:16:55.517 "trtype": "TCP", 00:16:55.518 "adrfam": "IPv4", 00:16:55.518 "traddr": "10.0.0.1", 00:16:55.518 "trsvcid": "46656" 00:16:55.518 }, 00:16:55.518 "auth": { 00:16:55.518 "state": "completed", 00:16:55.518 "digest": "sha512", 00:16:55.518 "dhgroup": "ffdhe6144" 00:16:55.518 } 00:16:55.518 } 00:16:55.518 ]' 00:16:55.777 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.777 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.777 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.777 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.777 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.777 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.777 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.777 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.777 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:55.777 17:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:16:56.345 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.345 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:56.345 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.345 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.345 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.345 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.345 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.345 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:56.345 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:56.605 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:56.605 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.605 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:56.605 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:56.605 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:56.605 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.605 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.605 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.605 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.605 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.605 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.605 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.605 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.174 00:16:57.174 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.174 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.174 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.174 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.174 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.174 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.174 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.174 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.174 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.174 { 00:16:57.174 "cntlid": 137, 00:16:57.174 "qid": 0, 00:16:57.174 "state": "enabled", 00:16:57.174 "thread": "nvmf_tgt_poll_group_000", 00:16:57.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:57.174 "listen_address": { 00:16:57.174 "trtype": "TCP", 00:16:57.174 "adrfam": "IPv4", 00:16:57.174 "traddr": "10.0.0.2", 00:16:57.174 "trsvcid": "4420" 00:16:57.174 }, 00:16:57.174 "peer_address": { 00:16:57.174 "trtype": "TCP", 00:16:57.174 "adrfam": "IPv4", 00:16:57.174 "traddr": "10.0.0.1", 00:16:57.174 "trsvcid": "46684" 00:16:57.174 }, 00:16:57.174 "auth": { 00:16:57.174 "state": "completed", 00:16:57.174 "digest": "sha512", 00:16:57.174 "dhgroup": "ffdhe8192" 00:16:57.174 } 00:16:57.174 } 00:16:57.174 ]' 00:16:57.174 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.174 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.174 17:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.436 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.436 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.436 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.436 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.436 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.436 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:57.436 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:16:58.002 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.002 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:58.002 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.002 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.002 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.002 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.002 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.002 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:58.261 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:58.261 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.261 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.261 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.261 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:58.261 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.261 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.261 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.261 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.261 17:53:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.261 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.261 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.261 17:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.828 00:16:58.828 17:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.828 17:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.828 17:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.828 17:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.828 17:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.828 17:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.828 17:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.828 17:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.828 17:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.828 { 00:16:58.828 "cntlid": 139, 00:16:58.828 "qid": 0, 00:16:58.828 "state": "enabled", 00:16:58.828 "thread": "nvmf_tgt_poll_group_000", 00:16:58.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:16:58.829 "listen_address": { 00:16:58.829 "trtype": "TCP", 00:16:58.829 "adrfam": "IPv4", 00:16:58.829 "traddr": "10.0.0.2", 00:16:58.829 "trsvcid": "4420" 00:16:58.829 }, 00:16:58.829 "peer_address": { 00:16:58.829 "trtype": "TCP", 00:16:58.829 "adrfam": "IPv4", 00:16:58.829 "traddr": "10.0.0.1", 00:16:58.829 "trsvcid": "60382" 00:16:58.829 }, 00:16:58.829 "auth": { 00:16:58.829 "state": "completed", 00:16:58.829 "digest": "sha512", 00:16:58.829 "dhgroup": "ffdhe8192" 00:16:58.829 } 00:16:58.829 } 00:16:58.829 ]' 00:16:58.829 17:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.829 17:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.829 17:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.086 17:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.086 17:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.086 17:53:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.086 17:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.086 17:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.086 17:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:59.086 17:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: --dhchap-ctrl-secret DHHC-1:02:MmI5NDgxODkxYWY0NGYxOTkwMTNiNDU5NWYxMzdhM2NkODU1MjA0NWRhYzE1M2MyfeP7cQ==: 00:16:59.651 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.910 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:59.910 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.910 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.910 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.910 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.910 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:59.910 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:59.910 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:59.910 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.910 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.910 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:59.910 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:59.910 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.910 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.910 17:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.910 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.910 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.910 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.910 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.910 17:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.476 00:17:00.476 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.476 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.476 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.476 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.476 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.476 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.476 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.476 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.476 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.476 { 00:17:00.476 "cntlid": 141, 00:17:00.476 "qid": 0, 00:17:00.476 "state": "enabled", 00:17:00.476 "thread": "nvmf_tgt_poll_group_000", 00:17:00.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:00.476 "listen_address": { 00:17:00.476 "trtype": "TCP", 00:17:00.476 "adrfam": "IPv4", 00:17:00.476 "traddr": "10.0.0.2", 00:17:00.476 "trsvcid": "4420" 00:17:00.476 }, 00:17:00.476 "peer_address": { 00:17:00.476 "trtype": "TCP", 00:17:00.476 "adrfam": "IPv4", 00:17:00.476 "traddr": "10.0.0.1", 00:17:00.476 "trsvcid": "60418" 00:17:00.476 }, 00:17:00.476 "auth": { 00:17:00.476 "state": "completed", 00:17:00.476 "digest": "sha512", 00:17:00.476 "dhgroup": "ffdhe8192" 00:17:00.476 } 00:17:00.476 } 00:17:00.476 ]' 00:17:00.476 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.734 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.734 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.734 17:53:48 
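[editor's aside] After every successful attach, the script cross-checks on the target side what the qpair actually negotiated; the qpairs JSON above is the raw material for that. A minimal sketch of the assertion, reusing $RPC from the sketch earlier (the jq expressions are the ones visible in the trace):

qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]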
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.734 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.734 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.734 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.734 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.734 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:17:00.734 17:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:01:OTM5ZGUyM2Y4OTllNjUzM2Y1MTQ2ZDRmOGQwN2ZhMjCJwLff: 00:17:01.300 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.300 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:01.300 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.300 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.300 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.300 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.300 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:01.300 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:01.558 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:01.559 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.559 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:01.559 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:01.559 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:01.559 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.559 17:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:17:01.559 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.559 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.559 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.559 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.559 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.559 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.125 00:17:02.125 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.125 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.125 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.125 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.125 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.125 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.125 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.125 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.125 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.125 { 00:17:02.125 "cntlid": 143, 00:17:02.125 "qid": 0, 00:17:02.125 "state": "enabled", 00:17:02.125 "thread": "nvmf_tgt_poll_group_000", 00:17:02.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:02.125 "listen_address": { 00:17:02.125 "trtype": "TCP", 00:17:02.125 "adrfam": "IPv4", 00:17:02.125 "traddr": "10.0.0.2", 00:17:02.125 "trsvcid": "4420" 00:17:02.125 }, 00:17:02.125 "peer_address": { 00:17:02.125 "trtype": "TCP", 00:17:02.125 "adrfam": "IPv4", 00:17:02.125 "traddr": "10.0.0.1", 00:17:02.125 "trsvcid": "60444" 00:17:02.125 }, 00:17:02.125 "auth": { 00:17:02.125 "state": "completed", 00:17:02.125 "digest": "sha512", 00:17:02.125 "dhgroup": "ffdhe8192" 00:17:02.125 } 00:17:02.125 } 00:17:02.125 ]' 00:17:02.125 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.125 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.125 
17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.382 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.382 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.382 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.382 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.382 17:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.382 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:17:02.382 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:17:02.947 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.947 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:02.947 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.947 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.947 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.947 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:02.947 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:02.947 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:02.947 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.947 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.947 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:03.207 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:03.207 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.207 17:53:50 
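[editor's aside] Each round also exercises the kernel initiator: nvme-cli is handed the raw DHHC-1 secrets on the command line (shortened to ... below; the full blobs appear in the trace). The IFS=, / printf %s pairs visible here are simply how the script joins the full digest and dhgroup lists into single comma-separated RPC arguments. A sketch of the kernel leg, with identifiers taken from this run:

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q "$HOSTNQN" --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 \
  --dhchap-secret 'DHHC-1:03:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0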
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.207 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:03.207 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:03.207 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.207 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.207 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.207 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.207 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.207 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.207 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.207 17:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.775 00:17:03.775 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.775 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.775 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.775 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.775 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.775 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.775 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.775 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.775 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.775 { 00:17:03.775 "cntlid": 145, 00:17:03.775 "qid": 0, 00:17:03.775 "state": "enabled", 00:17:03.775 "thread": "nvmf_tgt_poll_group_000", 00:17:03.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:03.775 "listen_address": { 00:17:03.775 "trtype": "TCP", 00:17:03.775 "adrfam": "IPv4", 00:17:03.775 "traddr": "10.0.0.2", 00:17:03.775 "trsvcid": "4420" 00:17:03.775 }, 00:17:03.775 "peer_address": { 00:17:03.775 
"trtype": "TCP", 00:17:03.775 "adrfam": "IPv4", 00:17:03.775 "traddr": "10.0.0.1", 00:17:03.775 "trsvcid": "60468" 00:17:03.775 }, 00:17:03.775 "auth": { 00:17:03.775 "state": "completed", 00:17:03.775 "digest": "sha512", 00:17:03.775 "dhgroup": "ffdhe8192" 00:17:03.775 } 00:17:03.775 } 00:17:03.775 ]' 00:17:03.775 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.775 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.775 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.775 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:03.775 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.034 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.034 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.034 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.034 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:17:04.034 17:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ZjMwYjRjMjQ4YjAxNTdmZmVmNGVkYmFkY2E5NTJhMDMzMTM3OTYwY2ZjOWJhNDk5ZJCS+Q==: --dhchap-ctrl-secret DHHC-1:03:ZjA3YTRjZmQxYTczNzE2ZjliNWExNjY4YmNjMmY1YTI4NDkxNDIwYzIzOTdlMmMxOGE5ZDgwOTRlMWFjZDA1MXzKGXQ=: 00:17:04.602 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.602 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:04.602 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.602 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.602 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.602 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:17:04.602 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.602 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.602 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.603 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:04.603 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:04.603 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:04.603 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:04.603 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.603 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:04.603 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:04.603 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:04.603 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:04.603 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:05.176 request: 00:17:05.176 { 00:17:05.176 "name": "nvme0", 00:17:05.176 "trtype": "tcp", 00:17:05.176 "traddr": "10.0.0.2", 00:17:05.176 "adrfam": "ipv4", 00:17:05.176 "trsvcid": "4420", 00:17:05.176 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:05.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:05.176 "prchk_reftag": false, 00:17:05.176 "prchk_guard": false, 00:17:05.176 "hdgst": false, 00:17:05.176 "ddgst": false, 00:17:05.176 "dhchap_key": "key2", 00:17:05.176 "allow_unrecognized_csi": false, 00:17:05.176 "method": "bdev_nvme_attach_controller", 00:17:05.176 "req_id": 1 00:17:05.176 } 00:17:05.176 Got JSON-RPC error response 00:17:05.176 response: 00:17:05.176 { 00:17:05.176 "code": -5, 00:17:05.176 "message": "Input/output error" 00:17:05.176 } 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.176 17:53:52 
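[editor's aside] The request/response block just above is an expected failure, not a bug: at @144 the subsystem was re-added with key1 only, so the @145 attach that presents key2 cannot authenticate and bdev_nvme_attach_controller returns code -5 (Input/output error). The NOT helper from autotest_common.sh inverts the exit status; a sketch of the same assertion without the helper, reusing the variables from earlier:

# expect the mismatched-key attach to fail
if $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
     -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2; then
  echo "unexpected: authentication succeeded with a key the subsystem does not hold" >&2
  exit 1
fi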
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:05.176 17:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:05.436 request: 00:17:05.436 { 00:17:05.436 "name": "nvme0", 00:17:05.436 "trtype": "tcp", 00:17:05.436 "traddr": "10.0.0.2", 00:17:05.436 "adrfam": "ipv4", 00:17:05.436 "trsvcid": "4420", 00:17:05.436 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:05.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:05.436 "prchk_reftag": false, 00:17:05.436 "prchk_guard": false, 00:17:05.436 "hdgst": false, 00:17:05.436 "ddgst": false, 00:17:05.436 "dhchap_key": "key1", 00:17:05.436 "dhchap_ctrlr_key": "ckey2", 00:17:05.436 "allow_unrecognized_csi": false, 00:17:05.436 "method": "bdev_nvme_attach_controller", 00:17:05.436 "req_id": 1 00:17:05.436 } 00:17:05.436 Got JSON-RPC error response 00:17:05.436 response: 00:17:05.436 { 00:17:05.436 "code": -5, 00:17:05.436 "message": "Input/output error" 00:17:05.436 } 00:17:05.436 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:05.436 17:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:05.436 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:05.436 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:05.436 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:05.436 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.436 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.694 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.694 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:17:05.694 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.694 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.694 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.694 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.694 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:05.694 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.694 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:05.694 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.694 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:05.694 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.694 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.694 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.694 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.952 request: 00:17:05.952 { 00:17:05.952 "name": "nvme0", 00:17:05.952 "trtype": "tcp", 00:17:05.952 "traddr": "10.0.0.2", 00:17:05.952 "adrfam": "ipv4", 00:17:05.952 "trsvcid": "4420", 00:17:05.952 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:05.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:05.952 "prchk_reftag": false, 00:17:05.952 "prchk_guard": false, 00:17:05.952 "hdgst": false, 00:17:05.952 "ddgst": false, 00:17:05.952 "dhchap_key": "key1", 00:17:05.952 "dhchap_ctrlr_key": "ckey1", 00:17:05.952 "allow_unrecognized_csi": false, 00:17:05.952 "method": "bdev_nvme_attach_controller", 00:17:05.952 "req_id": 1 00:17:05.952 } 00:17:05.952 Got JSON-RPC error response 00:17:05.952 response: 00:17:05.952 { 00:17:05.952 "code": -5, 00:17:05.952 "message": "Input/output error" 00:17:05.952 } 00:17:05.952 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:05.952 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:05.952 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:05.952 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:05.952 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:05.952 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.952 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.952 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.952 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2989899 00:17:05.952 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2989899 ']' 00:17:05.952 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2989899 00:17:05.952 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:05.952 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.952 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2989899 00:17:05.952 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.952 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.953 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2989899' 00:17:05.953 killing process with pid 2989899 00:17:05.953 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2989899 00:17:05.953 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2989899 00:17:06.212 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:06.212 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:06.212 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.212 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:06.212 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3015821 00:17:06.212 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3015821 00:17:06.212 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3015821 ']' 00:17:06.212 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.212 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.212 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.212 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.212 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.212 17:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:06.212 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.212 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:06.212 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:06.212 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:06.212 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.212 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.212 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:06.212 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3015821 00:17:06.212 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3015821 ']' 00:17:06.212 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.212 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.212 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:06.212 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.212 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.471 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.471 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:06.471 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:06.471 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.471 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.471 null0 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Tps 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.D3B ]] 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D3B 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.QAI 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.J9M ]] 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.J9M 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:06.731 17:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.bfx 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.MRw ]] 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MRw 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.faF 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
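[editor's aside] In this restarted phase the secrets come from the SPDK keyring rather than inline DHHC-1 strings: each /tmp/spdk.key-* file in the trace is registered under a key name, with a ckeyN companion only where a controller key exists for that index (key3 has none). A sketch against the target's default socket, using file names from this run:

$RPC keyring_file_add_key key0  /tmp/spdk.key-null.Tps
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D3B
$RPC keyring_file_add_key key3  /tmp/spdk.key-sha512.faF   # no ckey3 counterpart
# the subsystem then references keys by name instead of by value:
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3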
00:17:06.731 17:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.300 nvme0n1 00:17:07.300 17:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.300 17:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.300 17:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.559 17:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.559 17:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.559 17:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.559 17:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.559 17:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.559 17:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.559 { 00:17:07.559 "cntlid": 1, 00:17:07.559 "qid": 0, 00:17:07.559 "state": "enabled", 00:17:07.559 "thread": "nvmf_tgt_poll_group_000", 00:17:07.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:07.559 "listen_address": { 00:17:07.559 "trtype": "TCP", 00:17:07.559 "adrfam": "IPv4", 00:17:07.559 "traddr": "10.0.0.2", 00:17:07.559 "trsvcid": "4420" 00:17:07.559 }, 00:17:07.559 "peer_address": { 00:17:07.559 "trtype": "TCP", 00:17:07.559 "adrfam": "IPv4", 00:17:07.559 "traddr": "10.0.0.1", 00:17:07.559 "trsvcid": "60534" 00:17:07.559 }, 00:17:07.559 "auth": { 00:17:07.559 "state": "completed", 00:17:07.559 "digest": "sha512", 00:17:07.559 "dhgroup": "ffdhe8192" 00:17:07.559 } 00:17:07.559 } 00:17:07.559 ]' 00:17:07.559 17:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.559 17:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.559 17:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.559 17:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.559 17:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.559 17:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.559 17:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.559 17:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.817 17:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:17:07.817 17:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=: 00:17:08.386 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.386 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:08.386 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.386 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.386 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.386 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:17:08.386 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.386 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.386 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.386 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:08.386 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:08.646 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:08.646 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:08.646 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:08.646 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:08.646 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.646 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:08.646 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.646 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.646 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.646 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.905 request: 00:17:08.905 { 00:17:08.905 "name": "nvme0", 00:17:08.905 "trtype": "tcp", 00:17:08.905 "traddr": "10.0.0.2", 00:17:08.905 "adrfam": "ipv4", 00:17:08.905 "trsvcid": "4420", 00:17:08.905 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:08.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:08.905 "prchk_reftag": false, 00:17:08.905 "prchk_guard": false, 00:17:08.905 "hdgst": false, 00:17:08.905 "ddgst": false, 00:17:08.905 "dhchap_key": "key3", 00:17:08.905 "allow_unrecognized_csi": false, 00:17:08.905 "method": "bdev_nvme_attach_controller", 00:17:08.905 "req_id": 1 00:17:08.905 } 00:17:08.905 Got JSON-RPC error response 00:17:08.905 response: 00:17:08.905 { 00:17:08.905 "code": -5, 00:17:08.905 "message": "Input/output error" 00:17:08.905 } 00:17:08.905 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:08.905 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.905 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.905 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.905 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:08.905 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:08.905 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:08.905 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:08.905 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:08.905 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:08.905 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:08.905 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:08.905 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.905 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:08.905 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.906 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.906 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
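[editor's aside] The failures in this stretch are engineered parameter mismatches rather than key mismatches: @183 pins the host to sha256 even though key3 was provisioned as a sha512 key (its file is /tmp/spdk.key-sha512.faF), and @187 then narrows the host to ffdhe2048 only, so the @184 attach above and the @193 attach that follows both die in DH-HMAC-CHAP negotiation with the same code -5. A sketch of the digest half:

$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
if $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
     -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3; then
  echo "unexpected: sha256-only host completed auth against a sha512-provisioned key" >&2
  exit 1
fi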
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.906 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.166 request: 00:17:09.166 { 00:17:09.166 "name": "nvme0", 00:17:09.166 "trtype": "tcp", 00:17:09.166 "traddr": "10.0.0.2", 00:17:09.166 "adrfam": "ipv4", 00:17:09.166 "trsvcid": "4420", 00:17:09.166 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:09.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:17:09.166 "prchk_reftag": false, 00:17:09.166 "prchk_guard": false, 00:17:09.166 "hdgst": false, 00:17:09.166 "ddgst": false, 00:17:09.166 "dhchap_key": "key3", 00:17:09.166 "allow_unrecognized_csi": false, 00:17:09.166 "method": "bdev_nvme_attach_controller", 00:17:09.166 "req_id": 1 00:17:09.166 } 00:17:09.166 Got JSON-RPC error response 00:17:09.166 response: 00:17:09.166 { 00:17:09.166 "code": -5, 00:17:09.166 "message": "Input/output error" 00:17:09.166 } 00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.426 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.426 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
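Both rejected attaches above are deliberate: the host entry on the target was registered with key3 (a DHHC-1:03: secret, which designates SHA-512), and the host side was then narrowed first to the sha256 digest and then to the ffdhe2048 DH group via bdev_nvme_set_options, so authentication cannot complete and the expected outcome is the -5 (Input/output error) JSON-RPC response. A minimal standalone sketch of that negative-path pattern, reusing the rpc.py path, socket, and NQNs from the trace (the surrounding script structure is an assumption, not the auth.sh helper itself):

#!/usr/bin/env bash
# Expected-failure sketch: narrow the host's DH-HMAC-CHAP digests, then
# verify that the authenticated attach is refused.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/host.sock
hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
subnqn=nqn.2024-03.io.spdk:cnode0

# Restrict the host to sha256 while the registered key3 needs sha512.
"$rpc" -s "$sock" bdev_nvme_set_options --dhchap-digests sha256

# The attach has to fail; invert the exit status so failure is the pass case,
# which is exactly what the NOT wrapper in the trace does.
if "$rpc" -s "$sock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3; then
    echo "attach unexpectedly succeeded" >&2
    exit 1
fi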
00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:09.166 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.426 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:09.426 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:17:09.426 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:09.426 17:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.426 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:09.426 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:09.426 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:09.426 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:09.426 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:09.426 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:09.426 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:09.426 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:09.426 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:09.426 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:09.426 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:09.685 request:
00:17:09.685 {
00:17:09.685 "name": "nvme0",
00:17:09.685 "trtype": "tcp",
00:17:09.685 "traddr": "10.0.0.2",
00:17:09.685 "adrfam": "ipv4",
00:17:09.685 "trsvcid": "4420",
00:17:09.685 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:09.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:17:09.685 "prchk_reftag": false,
00:17:09.685 "prchk_guard": false,
00:17:09.685 "hdgst": false,
00:17:09.685 "ddgst": false,
00:17:09.685 "dhchap_key": "key0",
00:17:09.685 "dhchap_ctrlr_key": "key1",
00:17:09.685 "allow_unrecognized_csi": false,
00:17:09.685 "method": "bdev_nvme_attach_controller",
00:17:09.685 "req_id": 1
00:17:09.685 }
00:17:09.685 Got JSON-RPC error response
00:17:09.685 response:
00:17:09.685 {
00:17:09.685 "code": -5,
00:17:09.685 "message": "Input/output error"
00:17:09.685 }
00:17:09.685 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:09.685 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:09.685 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:09.685 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:09.685 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:17:09.685 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:17:09.685 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:17:09.945 nvme0n1
00:17:09.945 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:17:09.945 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:17:09.945 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:09.945 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:09.945 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:09.945 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:10.206 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1
00:17:10.206 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.206 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:10.206 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.206 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:17:10.206 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:17:10.206 17:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:17:10.775 nvme0n1
00:17:10.776 17:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:17:10.776 17:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
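That was the first successful rotation in the sequence: detach the controller, switch the target's host entry to key1 with nvmf_subsystem_set_keys, reattach with the new key, and confirm the controller name (the nvme0n1 line is the namespace reappearing). Condensed into a sketch — the target-side rpc_cmd wrapper is assumed to talk to the target's default RPC socket, which the trace hides:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
subnqn=nqn.2024-03.io.spdk:cnode0

# Drop the old session, rotate the key on the target, reconnect with it.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_set_keys "$subnqn" "$hostnqn" --dhchap-key key1    # target side (default socket assumed)
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 \
        -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1
# Verify the controller came back under the expected bdev name.
name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]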
00:17:10.776 17:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:11.035 17:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:11.035 17:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:11.035 17:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.035 17:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.035 17:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.035 17:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:17:11.035 17:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:17:11.035 17:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:11.294 17:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:11.294 17:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=:
00:17:11.294 17:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: --dhchap-ctrl-secret DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=:
00:17:11.863 17:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:17:11.863 17:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:17:11.863 17:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:17:11.863 17:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:17:11.863 17:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:17:11.863 17:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:17:11.863 17:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:17:11.863 17:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:11.863 17:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
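The nvme_connect step above exercises the kernel initiator rather than the SPDK bdev layer: both the host secret (key2, a DHHC-1:02:, i.e. SHA-384, secret) and the controller secret (key3) are passed explicitly, so the session is authenticated in both directions. The underlying nvme-cli call, copied from the trace and reformatted for readability:

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
    --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 \
    --dhchap-secret 'DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==:' \
    --dhchap-ctrl-secret 'DHHC-1:03:OWJlZmUzYzBhY2U1MmNjOTAzODk0NWJhODM1NzlhNjYxZGYwNDU1MjM1MzlmYWUwYjM5YTU4ZWUwNWVlNGZhZBAPI58=:'

The nvme_get_ctrlr trace that follows then walks /sys/devices/virtual/nvme-fabrics/ctl/nvme* to find which kernel controller carries the subsystem NQN, yielding nctrlr=nvme0.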
00:17:11.863 17:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:17:11.863 17:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:11.863 17:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:17:11.863 17:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:11.863 17:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:11.863 17:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:11.863 17:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:11.863 17:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1
00:17:11.863 17:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:17:11.863 17:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:17:12.431 request:
00:17:12.431 {
00:17:12.431 "name": "nvme0",
00:17:12.431 "trtype": "tcp",
00:17:12.431 "traddr": "10.0.0.2",
00:17:12.431 "adrfam": "ipv4",
00:17:12.431 "trsvcid": "4420",
00:17:12.431 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:12.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb",
00:17:12.431 "prchk_reftag": false,
00:17:12.431 "prchk_guard": false,
00:17:12.431 "hdgst": false,
00:17:12.431 "ddgst": false,
00:17:12.431 "dhchap_key": "key1",
00:17:12.431 "allow_unrecognized_csi": false,
00:17:12.431 "method": "bdev_nvme_attach_controller",
00:17:12.431 "req_id": 1
00:17:12.431 }
00:17:12.431 Got JSON-RPC error response
00:17:12.431 response:
00:17:12.431 {
00:17:12.431 "code": -5,
00:17:12.431 "message": "Input/output error"
00:17:12.431 }
00:17:12.431 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:12.431 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:12.431 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:12.431 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:12.431 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:12.431 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:12.431 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:12.999 nvme0n1
00:17:12.999 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:17:12.999 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:17:12.999 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:13.258 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:13.258 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:13.258 17:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:13.517 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:17:13.517 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:13.517 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:13.517 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:13.517 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:17:13.517 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:17:13.517 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:17:13.775 nvme0n1
00:17:13.775 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:17:13.775 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:17:13.775 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:13.775 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:13.775 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:13.775 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
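The @233/@234 sequence just above shows the inverse case: calling nvmf_subsystem_set_keys with no key arguments clears DH-HMAC-CHAP for the host entry, after which a plain attach with no --dhchap-key succeeds. A sketch using the same variables and hedges as the earlier sketches:

# No key arguments: authentication is cleared for this host entry
# (target-side call; default RPC socket assumed, as before).
"$rpc" nvmf_subsystem_set_keys "$subnqn" "$hostnqn"
# A completely unauthenticated attach now works -- note: no --dhchap-key.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 \
        -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0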
00:17:14.034 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key key3
00:17:14.034 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:14.034 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:14.034 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:14.034 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: '' 2s
00:17:14.034 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:17:14.034 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:17:14.034 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp:
00:17:14.034 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:17:14.034 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:17:14.034 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:17:14.034 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp: ]]
00:17:14.034 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp:
00:17:14.034 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:17:14.034 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:17:14.034 17:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:17:15.939 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:17:15.939 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:17:15.939 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:17:15.939 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:17:15.939 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:17:15.939 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:17:15.939 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:17:15.939 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key key2
00:17:15.939 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:15.939 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:15.939 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:15.939 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: 2s
00:17:15.939 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:17:15.939 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:17:15.939 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:17:15.939 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==:
00:17:15.939 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:17:15.940 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:17:15.940 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:17:15.940 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==: ]]
00:17:15.940 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==:
00:17:15.940 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:17:15.940 17:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:17:18.473 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:17:18.473 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:17:18.473 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:17:18.473 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:17:18.473 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:17:18.473 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:17:18.473 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:17:18.473 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:18.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:18.473 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:18.473 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:18.473 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:18.473 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
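The two nvme_set_keys blocks above re-key the live kernel controller through sysfs: the helper resolves /sys/devices/virtual/nvme-fabrics/ctl/nvme0, echoes the new DHHC secret, sleeps for the 2s timeout, and waitforblk confirms that nvme0n1 survived re-authentication. The trace elides the sysfs attribute names the echo targets; a sketch assuming the kernel's dhchap_secret/dhchap_ctrl_secret attributes:

dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
# Host key first (DHHC-1:01: designates a SHA-256 secret), then the ctrl key.
echo 'DHHC-1:01:ZTc1YTExN2JjYjhkZjdhNmJiZDk2ZWQ0NjBkZDE0ZDgvTQjp:' \
    > "$dev/dhchap_secret"        # assumed attribute name
sleep 2
echo 'DHHC-1:02:NjY2ZjRmZDgwNTM4ZmM0YzQ0NzJkNjUzYjcyYjk2YmVhNWZhNTY4M2UxYjhkMDBiuaLzTg==:' \
    > "$dev/dhchap_ctrl_secret"   # assumed attribute name
sleep 2
lsblk -l -o NAME | grep -q -w nvme0n1   # namespace must still be present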
00:17:18.473 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:17:18.473 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:17:18.473 17:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:17:18.731 nvme0n1
00:17:18.731 17:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:18.731 17:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:18.731 17:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:18.732 17:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:18.732 17:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:18.732 17:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:19.299 17:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:17:19.299 17:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:17:19.299 17:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:19.558 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:19.558 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:17:19.558 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:19.558 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:19.558 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:19.558 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:17:19.558 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:17:19.558 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:17:19.558 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:17:19.558 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:19.816 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:19.816 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:19.816 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:19.816 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:19.816 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:19.816 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:17:19.816 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:19.816 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:17:19.816 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:17:19.816 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:19.816 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:17:19.816 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:19.816 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:17:19.816 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:17:20.075 request:
00:17:20.075 {
00:17:20.075 "name": "nvme0",
00:17:20.075 "dhchap_key": "key1",
00:17:20.076 "dhchap_ctrlr_key": "key3",
00:17:20.076 "method": "bdev_nvme_set_keys",
00:17:20.076 "req_id": 1
00:17:20.076 }
00:17:20.076 Got JSON-RPC error response
00:17:20.076 response:
00:17:20.076 {
00:17:20.076 "code": -13,
00:17:20.076 "message": "Permission denied"
00:17:20.076 }
00:17:20.076 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:20.076 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:20.076 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:20.076 17:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
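bdev_nvme_set_keys re-keys a live SPDK host controller in place, and the failure above is intentional: the target's host entry was just switched to key2/key3, so asking the host controller to re-authenticate with key1 is refused with -13 (Permission denied) rather than -5, presumably because the RPC itself is well-formed but the mismatched credentials are rejected. Sketch of the expected-failure call, same variables as before:

# Target currently holds key2/key3; re-keying the host to key1 must fail.
if "$rpc" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key key3; then
    echo "set_keys unexpectedly succeeded" >&2
    exit 1
fi

The loop that follows polls bdev_nvme_get_controllers with jq length, sleeping 1s between checks, until the controller count drops to 0.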
00:17:20.333 17:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:17:20.333 17:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:17:20.333 17:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:20.334 17:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 ))
00:17:20.334 17:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:17:21.270 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:17:21.270 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:21.270 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:17:21.530 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
00:17:21.530 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key key1
00:17:21.530 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:21.530 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.530 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:21.530 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:17:21.530 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:17:21.530 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:17:22.469 nvme0n1
00:17:22.469 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3
00:17:22.469 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:22.469 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:22.469 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:22.469 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:17:22.469 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:22.469 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:17:22.469 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:17:22.469 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:22.469 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:17:22.469 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:22.469 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:17:22.469 17:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:17:22.729 request:
00:17:22.729 {
00:17:22.729 "name": "nvme0",
00:17:22.729 "dhchap_key": "key2",
00:17:22.729 "dhchap_ctrlr_key": "key0",
00:17:22.729 "method": "bdev_nvme_set_keys",
00:17:22.729 "req_id": 1
00:17:22.729 }
00:17:22.729 Got JSON-RPC error response
00:17:22.729 response:
00:17:22.729 {
00:17:22.729 "code": -13,
00:17:22.729 "message": "Permission denied"
00:17:22.729 }
00:17:22.729 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:22.729 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:22.729 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:22.729 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:22.729 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:17:22.729 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:17:22.729 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:23.052 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:17:23.052 17:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:17:24.039 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:17:24.039 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:24.039 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:17:24.039 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 ))
00:17:24.039 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT
00:17:24.039 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup
00:17:24.039 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2989929
00:17:24.039 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2989929 ']'
00:17:24.039 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2989929
00:17:24.039 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:17:24.039 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:24.039 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2989929
00:17:24.039 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:17:24.039 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:17:24.039 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2989929'
00:17:24.039 killing process with pid 2989929
00:17:24.039 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2989929
00:17:24.039 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2989929
00:17:24.299 17:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini
00:17:24.299 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:24.299 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync
00:17:24.299 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:24.299 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e
00:17:24.299 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:24.299 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:24.299 rmmod nvme_tcp
00:17:24.299 rmmod nvme_fabrics
00:17:24.299 rmmod nvme_keyring
00:17:24.299 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:24.299 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e
00:17:24.299 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0
00:17:24.299 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3015821 ']'
00:17:24.299 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3015821
00:17:24.299 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3015821 ']'
00:17:24.299 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3015821
00:17:24.299 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:17:24.299 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:24.299 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3015821
00:17:24.300 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:24.300 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:24.300 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3015821'
00:17:24.300 killing process with pid 3015821
00:17:24.300 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3015821
00:17:24.559 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3015821
00:17:24.559 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:24.559 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:24.559 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:24.559 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr
00:17:24.559 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save
00:17:24.559 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:24.559 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore
00:17:24.559 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:24.559 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:24.559 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:24.559 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:24.559 17:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:26.466 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:26.466 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Tps /tmp/spdk.key-sha256.QAI /tmp/spdk.key-sha384.bfx /tmp/spdk.key-sha512.faF /tmp/spdk.key-sha512.D3B /tmp/spdk.key-sha384.J9M /tmp/spdk.key-sha256.MRw '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log
00:17:26.466
00:17:26.466 real 2m18.345s
00:17:26.466 user 5m9.759s
00:17:26.466 sys 0m20.194s
00:17:26.466 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:26.466 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.466 ************************************
00:17:26.466 END TEST nvmf_auth_target
00:17:26.466 ************************************
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']'
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:26.725 ************************************
00:17:26.725 START TEST nvmf_bdevio_no_huge
00:17:26.725 ************************************
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:17:26.725 * Looking for test storage...
00:17:26.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-:
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-:
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<'
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1
00:17:26.725 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:26.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:26.726 --rc genhtml_branch_coverage=1
00:17:26.726 --rc genhtml_function_coverage=1
00:17:26.726 --rc genhtml_legend=1
00:17:26.726 --rc geninfo_all_blocks=1
00:17:26.726 --rc geninfo_unexecuted_blocks=1
00:17:26.726
00:17:26.726 '
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:26.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:26.726 --rc genhtml_branch_coverage=1
00:17:26.726 --rc genhtml_function_coverage=1
00:17:26.726 --rc genhtml_legend=1
00:17:26.726 --rc geninfo_all_blocks=1
00:17:26.726 --rc geninfo_unexecuted_blocks=1
00:17:26.726
00:17:26.726 '
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:17:26.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:26.726 --rc genhtml_branch_coverage=1
00:17:26.726 --rc genhtml_function_coverage=1
00:17:26.726 --rc genhtml_legend=1
00:17:26.726 --rc geninfo_all_blocks=1
00:17:26.726 --rc geninfo_unexecuted_blocks=1
00:17:26.726
00:17:26.726 '
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:17:26.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:26.726 --rc genhtml_branch_coverage=1
00:17:26.726 --rc genhtml_function_coverage=1
00:17:26.726 --rc genhtml_legend=1
00:17:26.726 --rc geninfo_all_blocks=1
00:17:26.726 --rc geninfo_unexecuted_blocks=1
00:17:26.726
00:17:26.726 '
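The scripts/common.sh trace above is a component-wise version compare: the lcov version and the threshold are split on '.', '-', and ':' (the IFS=.-: lines), each numeric field is compared in turn, and since 1.15 sorts before 2 the legacy --rc option spelling is kept. Roughly (a reconstruction of the idiom, not the verbatim helper):

lt() {
    # "version $1 sorts before version $2": split on . - : and compare fields
    local IFS=.-: i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
    done
    return 1    # equal is not less-than
}
lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* option names"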
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:17:26.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:26.726 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.727 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:26.727 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:26.727 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:26.727 17:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:33.300 
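The stderr complaint just above (common.sh line 33: [: : integer expression expected) is a plain test(1) pitfall caught in the act: -eq demands integers on both sides, and an unset variable expands to an empty string. A two-line reproduction plus the usual defensive default; the variable name here is hypothetical:

unset SOME_TEST_FLAG                        # hypothetical stand-in for the unset flag
[ "$SOME_TEST_FLAG" -eq 1 ]                 # stderr: [: : integer expression expected
[ "${SOME_TEST_FLAG:-0}" -eq 1 ] || echo "flag off"   # defaulting the expansion avoids the error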
17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:33.300 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:33.300 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:33.300 Found net devices under 0000:31:00.0: cvl_0_0 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.300 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
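Both "Found net devices" lines come from the same sysfs walk: for every PCI slot that matched the Intel 0x159b (E810) device ID, the script globs the slot's net/ directory and strips the result down to bare interface names. A condensed sketch of that step, assuming the matched slots are already known:

pci_devs=(0000:31:00.0 0000:31:00.1)                   # slots the 0x8086:0x159b match produced
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done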
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:33.301 Found net devices under 0000:31:00.1: cvl_0_1 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:33.301 17:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
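nvmf_tcp_init, traced above, builds the two-endpoint topology this phy run needs: one E810 port stays in the root namespace as the initiator (10.0.0.1), the other is moved into a fresh namespace as the target (10.0.0.2), so initiator-to-target traffic cannot short-circuit through the local stack. The same setup condensed to its commands, with device names and addresses exactly as in the log (root required):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up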
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:33.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:33.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:17:33.301 00:17:33.301 --- 10.0.0.2 ping statistics --- 00:17:33.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.301 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:33.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:33.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:17:33.301 00:17:33.301 --- 10.0.0.1 ping statistics --- 00:17:33.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.301 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3024846 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3024846 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3024846 ']' 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
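The ipts call the trace expands above is a tagging wrapper: every firewall rule the test installs carries an SPDK_NVMF comment, so the matching iptr helper (it runs during teardown, near the end of this log) can sweep them all out in one save/filter/restore pass without tracking individual rules. A reconstruction consistent with the two expansions the trace shows:

ipts() {   # install a rule, tagged with its own text as a comment
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
iptr() {   # remove every tagged rule in one pass
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
# ... run tests ...
iptr                                                        # teardown: sweep every tagged rule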
/var/tmp/spdk.sock...' 00:17:33.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:33.301 [2024-12-06 17:54:20.173483] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:17:33.301 [2024-12-06 17:54:20.173550] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:33.301 [2024-12-06 17:54:20.276020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:33.301 [2024-12-06 17:54:20.335538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.301 [2024-12-06 17:54:20.335581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.301 [2024-12-06 17:54:20.335590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.301 [2024-12-06 17:54:20.335597] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.301 [2024-12-06 17:54:20.335603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
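The launch being waited on here is the point of the whole suite: nvmf_tgt runs inside the target namespace with hugepages disabled (--no-huge) and capped at 1024 MB of ordinary anonymous memory (-s 1024), and the DPDK EAL parameter line above confirms --no-huge --iova-mode=va took effect. Condensed from the trace, with paths and flags as in the log; waitforlisten blocks until the app answers on /var/tmp/spdk.sock:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk \
    build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
waitforlisten "$nvmfpid"   # polls the RPC socket /var/tmp/spdk.sock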
00:17:33.301 [2024-12-06 17:54:20.337174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:33.301 [2024-12-06 17:54:20.337337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:33.301 [2024-12-06 17:54:20.337534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:33.301 [2024-12-06 17:54:20.337534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:33.301 17:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:33.301 [2024-12-06 17:54:21.015509] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:33.301 Malloc0 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:33.301 [2024-12-06 17:54:21.053201] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:33.301 { 00:17:33.301 "params": { 00:17:33.301 "name": "Nvme$subsystem", 00:17:33.301 "trtype": "$TEST_TRANSPORT", 00:17:33.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:33.301 "adrfam": "ipv4", 00:17:33.301 "trsvcid": "$NVMF_PORT", 00:17:33.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:33.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:33.301 "hdgst": ${hdgst:-false}, 00:17:33.301 "ddgst": ${ddgst:-false} 00:17:33.301 }, 00:17:33.301 "method": "bdev_nvme_attach_controller" 00:17:33.301 } 00:17:33.301 EOF 00:17:33.301 )") 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:33.301 17:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:33.301 "params": { 00:17:33.301 "name": "Nvme1", 00:17:33.301 "trtype": "tcp", 00:17:33.301 "traddr": "10.0.0.2", 00:17:33.301 "adrfam": "ipv4", 00:17:33.301 "trsvcid": "4420", 00:17:33.301 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.301 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:33.301 "hdgst": false, 00:17:33.301 "ddgst": false 00:17:33.301 }, 00:17:33.301 "method": "bdev_nvme_attach_controller" 00:17:33.301 }' 00:17:33.301 [2024-12-06 17:54:21.094072] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
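At this point the target side is fully assembled over the RPC socket: a TCP transport, a 64 MiB / 512-byte-block malloc bdev named Malloc0, subsystem cnode1 exposing it as a namespace, and a listener on 10.0.0.2:4420. rpc_cmd is, in effect, a wrapper over scripts/rpc.py, so the same bring-up can be replayed by hand with the values copied from the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON printed just above is the initiator half: gen_nvmf_target_json expands one bdev_nvme_attach_controller stanza per subsystem, and bdevio consumes it through --json /dev/fd/62, so no config file is ever written to disk.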
00:17:33.301 [2024-12-06 17:54:21.094148] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3024923 ] 00:17:33.561 [2024-12-06 17:54:21.183327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:33.561 [2024-12-06 17:54:21.237780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.561 [2024-12-06 17:54:21.237934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.561 [2024-12-06 17:54:21.237934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.820 I/O targets: 00:17:33.820 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:33.820 00:17:33.820 00:17:33.820 CUnit - A unit testing framework for C - Version 2.1-3 00:17:33.820 http://cunit.sourceforge.net/ 00:17:33.820 00:17:33.820 00:17:33.820 Suite: bdevio tests on: Nvme1n1 00:17:33.820 Test: blockdev write read block ...passed 00:17:33.820 Test: blockdev write zeroes read block ...passed 00:17:33.820 Test: blockdev write zeroes read no split ...passed 00:17:33.820 Test: blockdev write zeroes read split ...passed 00:17:33.820 Test: blockdev write zeroes read split partial ...passed 00:17:33.820 Test: blockdev reset ...[2024-12-06 17:54:21.564577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:33.820 [2024-12-06 17:54:21.564650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf1f70 (9): Bad file descriptor 00:17:33.820 [2024-12-06 17:54:21.617704] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:17:33.820 passed 00:17:33.820 Test: blockdev write read 8 blocks ...passed 00:17:33.820 Test: blockdev write read size > 128k ...passed 00:17:33.820 Test: blockdev write read invalid size ...passed 00:17:34.079 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:34.079 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:34.079 Test: blockdev write read max offset ...passed 00:17:34.079 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:34.079 Test: blockdev writev readv 8 blocks ...passed 00:17:34.079 Test: blockdev writev readv 30 x 1block ...passed 00:17:34.079 Test: blockdev writev readv block ...passed 00:17:34.079 Test: blockdev writev readv size > 128k ...passed 00:17:34.079 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:34.079 Test: blockdev comparev and writev ...[2024-12-06 17:54:21.795331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:34.079 [2024-12-06 17:54:21.795363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.079 [2024-12-06 17:54:21.795379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:34.079 [2024-12-06 17:54:21.795388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:34.079 [2024-12-06 17:54:21.795710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:34.079 [2024-12-06 17:54:21.795722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:34.079 [2024-12-06 17:54:21.795735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:34.079 [2024-12-06 17:54:21.795743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:34.079 [2024-12-06 17:54:21.796033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:34.079 [2024-12-06 17:54:21.796044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:34.079 [2024-12-06 17:54:21.796058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:34.079 [2024-12-06 17:54:21.796066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:34.079 [2024-12-06 17:54:21.796397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:34.079 [2024-12-06 17:54:21.796408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:34.079 [2024-12-06 17:54:21.796422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:34.079 [2024-12-06 17:54:21.796430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:34.079 passed 00:17:34.079 Test: blockdev nvme passthru rw ...passed 00:17:34.079 Test: blockdev nvme passthru vendor specific ...[2024-12-06 17:54:21.880550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:34.079 [2024-12-06 17:54:21.880565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:34.079 [2024-12-06 17:54:21.880798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:34.079 [2024-12-06 17:54:21.880808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:34.079 [2024-12-06 17:54:21.881024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:34.079 [2024-12-06 17:54:21.881034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:34.079 [2024-12-06 17:54:21.881262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:34.079 [2024-12-06 17:54:21.881272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:34.079 passed 00:17:34.079 Test: blockdev nvme admin passthru ...passed 00:17:34.339 Test: blockdev copy ...passed 00:17:34.339 00:17:34.339 Run Summary: Type Total Ran Passed Failed Inactive 00:17:34.339 suites 1 1 n/a 0 0 00:17:34.339 tests 23 23 23 0 0 00:17:34.339 asserts 152 152 152 0 n/a 00:17:34.339 00:17:34.339 Elapsed time = 0.981 seconds 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:34.598 rmmod nvme_tcp 00:17:34.598 rmmod nvme_fabrics 00:17:34.598 rmmod nvme_keyring 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
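The Run Summary above closes the suite cleanly: 23 of 23 bdevio tests passed in under a second. The alarming-looking notice pairs earlier in the run are the tests' own negative cases, not failures; decoding the (SCT/SC) status fields SPDK prints in each completion:

#   COMPARE  -> (02/85)  Media Error / Compare Failure: the deliberate miscompare
#   WRITE    -> (00/09)  Generic / Aborted, Failed Fused: its fused partner must abort
#   passthru -> (00/01)  Generic / Invalid Opcode: the unsupported-opcode probe

The COMPARE/WRITE pair is exactly how NVMe fused compare-and-write is specified to fail: when the compare half miscompares, the controller aborts the paired write with that status, which is what the comparev-and-writev test asserts.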
nvmf/common.sh@128 -- # set -e 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3024846 ']' 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3024846 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3024846 ']' 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3024846 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3024846 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3024846' 00:17:34.598 killing process with pid 3024846 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3024846 00:17:34.598 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3024846 00:17:34.858 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:34.858 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:34.858 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:34.858 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:34.858 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:34.858 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:34.858 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:34.858 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:34.858 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:34.858 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.858 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.858 17:54:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.793 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:36.793 00:17:36.793 real 0m10.261s 00:17:36.793 user 0m12.020s 00:17:36.793 sys 0m5.229s 00:17:36.793 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.793 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
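Teardown pivots on the pid recorded at start-up: before signalling anything, killprocess re-resolves the pid to a process name (the trace shows pid 3024846 mapping to comm reactor_3) and refuses obviously-wrong targets. A sketch of that guard-then-kill shape, following the commands traced above; the exact body in autotest_common.sh may differ:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # no pid recorded: nothing to kill
    kill -0 "$pid" || return 0                # already gone
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")   # reactor_3 in this run
        [ "$name" = sudo ] && return 1            # never signal a bare sudo
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                               # reap it so ports and sockets are freed
}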
common/autotest_common.sh@10 -- # set +x 00:17:36.793 ************************************ 00:17:36.793 END TEST nvmf_bdevio_no_huge 00:17:36.793 ************************************ 00:17:36.793 17:54:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:36.793 17:54:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:36.793 17:54:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.793 17:54:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:37.054 ************************************ 00:17:37.054 START TEST nvmf_tls 00:17:37.054 ************************************ 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:37.054 * Looking for test storage... 00:17:37.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
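The asterisk banners framing END TEST nvmf_bdevio_no_huge and START TEST nvmf_tls come from run_test, the wrapper that names, times, and brackets every suite; the '[' 3 -le 1 ']' probe above is its arity guard. An illustrative reconstruction of the wrapper's shape, not the exact autotest_common.sh body:

run_test() {
    if [ $# -le 1 ]; then                     # need a name plus a command
        echo "usage: run_test <name> <command...>" >&2
        return 1
    fi
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    "$@"                                       # e.g. tls.sh --transport=tcp
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}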
ver1_l : ver2_l) )) 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:37.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.054 --rc genhtml_branch_coverage=1 00:17:37.054 --rc genhtml_function_coverage=1 00:17:37.054 --rc genhtml_legend=1 00:17:37.054 --rc geninfo_all_blocks=1 00:17:37.054 --rc geninfo_unexecuted_blocks=1 00:17:37.054 00:17:37.054 ' 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:37.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.054 --rc genhtml_branch_coverage=1 00:17:37.054 --rc genhtml_function_coverage=1 00:17:37.054 --rc genhtml_legend=1 00:17:37.054 --rc geninfo_all_blocks=1 00:17:37.054 --rc geninfo_unexecuted_blocks=1 00:17:37.054 00:17:37.054 ' 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:37.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.054 --rc genhtml_branch_coverage=1 00:17:37.054 --rc genhtml_function_coverage=1 00:17:37.054 --rc genhtml_legend=1 00:17:37.054 --rc geninfo_all_blocks=1 00:17:37.054 --rc geninfo_unexecuted_blocks=1 00:17:37.054 00:17:37.054 ' 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:37.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.054 --rc genhtml_branch_coverage=1 00:17:37.054 --rc genhtml_function_coverage=1 00:17:37.054 --rc genhtml_legend=1 00:17:37.054 --rc geninfo_all_blocks=1 00:17:37.054 --rc geninfo_unexecuted_blocks=1 00:17:37.054 00:17:37.054 ' 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:37.054 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:37.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:17:37.055 17:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:43.622 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:43.622 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:43.622 Found net devices under 0000:31:00.0: cvl_0_0 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:43.622 Found net devices under 0000:31:00.1: cvl_0_1 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:43.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.602 ms 00:17:43.622 00:17:43.622 --- 10.0.0.2 ping statistics --- 00:17:43.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.622 rtt min/avg/max/mdev = 0.602/0.602/0.602/0.000 ms 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:43.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:17:43.622 00:17:43.622 --- 10.0.0.1 ping statistics --- 00:17:43.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.622 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.622 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3029843 00:17:43.623 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3029843 00:17:43.623 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3029843 ']' 00:17:43.623 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.623 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:43.623 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.623 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.623 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.623 17:54:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.623 [2024-12-06 17:54:30.885008] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
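The two pings earlier in this stretch (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) confirm what nvmf_tcp_init set up before the target app is launched: the first E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace as the target side, cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator, and an iptables rule admits NVMe/TCP traffic on port 4420. A hedged reconstruction of that topology with a veth pair (the veth names are invented here for a machine without the physical ports; the real rule also tags itself with an iptables comment):

    ip netns add spdk_tgt_ns
    ip link add veth_init type veth peer name veth_tgt   # stand-ins for cvl_0_1 / cvl_0_0
    ip link set veth_tgt netns spdk_tgt_ns
    ip addr add 10.0.0.1/24 dev veth_init                # initiator side, root namespace
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_init up
    ip netns exec spdk_tgt_ns ip link set veth_tgt up
    ip netns exec spdk_tgt_ns ip link set lo up
    iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1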
00:17:43.623 [2024-12-06 17:54:30.885077] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.623 [2024-12-06 17:54:30.979713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.623 [2024-12-06 17:54:31.030896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.623 [2024-12-06 17:54:31.030951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.623 [2024-12-06 17:54:31.030960] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.623 [2024-12-06 17:54:31.030967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.623 [2024-12-06 17:54:31.030974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.623 [2024-12-06 17:54:31.031751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.882 17:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.882 17:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:43.882 17:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:43.882 17:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:43.882 17:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.882 17:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.882 17:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:43.882 17:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:44.141 true 00:17:44.141 17:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:44.141 17:54:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:44.400 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:44.400 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:44.400 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:44.400 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:44.400 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:44.659 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:44.659 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:44.659 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:44.918 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # jq -r .tls_version 00:17:44.918 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:44.918 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:44.919 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:44.919 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:44.919 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:45.177 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:45.177 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:45.177 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:45.177 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:45.177 17:54:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:45.434 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:45.434 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:45.434 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.QzUlFIWUbK 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.SbsIMzouIh 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.QzUlFIWUbK 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.SbsIMzouIh 00:17:45.692 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:45.950 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:46.208 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.QzUlFIWUbK 00:17:46.208 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QzUlFIWUbK 00:17:46.208 17:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:46.208 [2024-12-06 17:54:34.022294] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.466 17:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:46.466 17:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:46.724 [2024-12-06 17:54:34.355103] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:46.724 [2024-12-06 17:54:34.355306] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.724 17:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:46.724 malloc0 00:17:46.724 17:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:46.982 17:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QzUlFIWUbK 00:17:47.241 17:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:47.241 17:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.QzUlFIWUbK 00:17:59.453 Initializing NVMe Controllers 00:17:59.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:59.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:59.453 Initialization complete. Launching workers. 00:17:59.453 ======================================================== 00:17:59.453 Latency(us) 00:17:59.453 Device Information : IOPS MiB/s Average min max 00:17:59.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18671.69 72.94 3427.88 1033.82 3995.83 00:17:59.453 ======================================================== 00:17:59.453 Total : 18671.69 72.94 3427.88 1033.82 3995.83 00:17:59.453 00:17:59.453 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QzUlFIWUbK 00:17:59.453 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:59.453 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:59.453 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:59.453 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QzUlFIWUbK 00:17:59.453 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:59.453 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3032924 00:17:59.453 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:59.453 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3032924 /var/tmp/bdevperf.sock 00:17:59.453 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3032924 ']' 00:17:59.453 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.453 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.453 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
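The two interchange keys created above by tls.sh@119 and tls.sh@120 come from format_interchange_psk, which the trace shows handing the raw key and a digest flag of 1 to an inline python snippet. Decoding the resulting strings suggests the layout is the ASCII key material plus a trailing 4-byte CRC32, base64-encoded between the NVMeTLSkey-1:01: prefix and a closing colon. A standalone sketch under those assumptions (the CRC byte order in particular is a guess, not taken from the source):

    format_psk() {   # hedged stand-in for format_interchange_psk in nvmf/common.sh
        python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:01:%s:" % base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode())' "$1"
    }
    format_psk 00112233445566778899aabbccddeeff   # should match the first key above if the assumptions hold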
00:17:59.453 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.453 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.453 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:59.453 [2024-12-06 17:54:45.133984] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:17:59.453 [2024-12-06 17:54:45.134040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3032924 ] 00:17:59.453 [2024-12-06 17:54:45.212024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.453 [2024-12-06 17:54:45.247241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.453 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.453 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:59.453 17:54:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QzUlFIWUbK 00:17:59.453 17:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:59.453 [2024-12-06 17:54:46.203998] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:59.453 TLSTESTn1 00:17:59.453 17:54:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:59.453 Running I/O for 10 seconds... 
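Stripped of the xtrace noise, the TLS I/O path being exercised here is three RPC calls against the bdevperf control socket, all taken verbatim from the trace above (only the $rpc shorthand is added):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $rpc keyring_file_add_key key0 /tmp/tmp.QzUlFIWUbK
    $rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests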
00:18:00.678 5041.00 IOPS, 19.69 MiB/s
[2024-12-06T16:54:49.441Z] 4210.00 IOPS, 16.45 MiB/s
[2024-12-06T16:54:50.380Z] 3990.67 IOPS, 15.59 MiB/s
[2024-12-06T16:54:51.762Z] 4013.25 IOPS, 15.68 MiB/s
[2024-12-06T16:54:52.701Z] 4187.40 IOPS, 16.36 MiB/s
[2024-12-06T16:54:53.641Z] 4135.67 IOPS, 16.15 MiB/s
[2024-12-06T16:54:54.577Z] 4119.57 IOPS, 16.09 MiB/s
[2024-12-06T16:54:55.516Z] 4145.25 IOPS, 16.19 MiB/s
[2024-12-06T16:54:56.452Z] 4272.44 IOPS, 16.69 MiB/s
[2024-12-06T16:54:56.452Z] 4283.30 IOPS, 16.73 MiB/s
00:18:08.625 Latency(us)
00:18:08.625 [2024-12-06T16:54:56.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:08.625 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:08.625 Verification LBA range: start 0x0 length 0x2000
00:18:08.625 TLSTESTn1 : 10.03 4282.93 16.73 0.00 0.00 29821.88 5406.72 31020.37
00:18:08.625 [2024-12-06T16:54:56.452Z] ===================================================================================================================
00:18:08.625 [2024-12-06T16:54:56.452Z] Total : 4282.93 16.73 0.00 0.00 29821.88 5406.72 31020.37
00:18:08.625 {
00:18:08.625   "results": [
00:18:08.625     {
00:18:08.625       "job": "TLSTESTn1",
00:18:08.625       "core_mask": "0x4",
00:18:08.625       "workload": "verify",
00:18:08.625       "status": "finished",
00:18:08.625       "verify_range": {
00:18:08.625         "start": 0,
00:18:08.625         "length": 8192
00:18:08.625       },
00:18:08.625       "queue_depth": 128,
00:18:08.625       "io_size": 4096,
00:18:08.625       "runtime": 10.030528,
00:18:08.625       "iops": 4282.925086296554,
00:18:08.625       "mibps": 16.730176118345913,
00:18:08.625       "io_failed": 0,
00:18:08.625       "io_timeout": 0,
00:18:08.625       "avg_latency_us": 29821.875406579766,
00:18:08.625       "min_latency_us": 5406.72,
00:18:08.625       "max_latency_us": 31020.373333333333
00:18:08.625     }
00:18:08.625   ],
00:18:08.625   "core_count": 1
00:18:08.625 }
00:18:08.625 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:08.625 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3032924
00:18:08.625 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3032924 ']'
00:18:08.625 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3032924
00:18:08.625 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:08.625 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:08.625 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3032924
00:18:08.885 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:08.885 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:08.885 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3032924'
killing process with pid 3032924
00:18:08.885 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3032924
Received shutdown signal, test time was about 10.000000 seconds
00:18:08.885
00:18:08.885 Latency(us)
00:18:08.885 [2024-12-06T16:54:56.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:08.885 [2024-12-06T16:54:56.712Z]
=================================================================================================================== 00:18:08.885 [2024-12-06T16:54:56.712Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:08.885 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3032924 00:18:08.885 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SbsIMzouIh 00:18:08.885 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:08.885 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SbsIMzouIh 00:18:08.885 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:08.885 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.885 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:08.885 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.885 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SbsIMzouIh 00:18:08.885 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:08.885 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:08.885 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:08.885 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SbsIMzouIh 00:18:08.885 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:08.885 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3035286 00:18:08.886 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:08.886 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3035286 /var/tmp/bdevperf.sock 00:18:08.886 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3035286 ']' 00:18:08.886 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.886 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.886 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
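From here the script turns to negative testing: tls.sh@147 launches run_bdevperf with the second key (/tmp/tmp.SbsIMzouIh) against a target that only trusts the first one, and wraps it in NOT so the case passes only if the attach fails. A toy version of that inversion (the real helper in autotest_common.sh is more elaborate, for instance the es > 128 handling visible later in the trace):

    NOT() {   # succeed exactly when the wrapped command fails
        if "$@"; then
            return 1
        else
            return 0
        fi
    }
    NOT false && echo "expected failure observed"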
00:18:08.886 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.886 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.886 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:08.886 [2024-12-06 17:54:56.604201] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:18:08.886 [2024-12-06 17:54:56.604256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035286 ] 00:18:08.886 [2024-12-06 17:54:56.668887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.886 [2024-12-06 17:54:56.697944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.146 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.147 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:09.147 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SbsIMzouIh 00:18:09.147 17:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:09.407 [2024-12-06 17:54:57.060365] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.407 [2024-12-06 17:54:57.069695] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:09.407 [2024-12-06 17:54:57.070415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a35b0 (107): Transport endpoint is not connected 00:18:09.408 [2024-12-06 17:54:57.071411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a35b0 (9): Bad file descriptor 00:18:09.408 [2024-12-06 17:54:57.072412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:09.408 [2024-12-06 17:54:57.072422] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:09.408 [2024-12-06 17:54:57.072429] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:09.408 [2024-12-06 17:54:57.072435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:09.408 request: 00:18:09.408 { 00:18:09.408 "name": "TLSTEST", 00:18:09.408 "trtype": "tcp", 00:18:09.408 "traddr": "10.0.0.2", 00:18:09.408 "adrfam": "ipv4", 00:18:09.408 "trsvcid": "4420", 00:18:09.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.408 "prchk_reftag": false, 00:18:09.408 "prchk_guard": false, 00:18:09.408 "hdgst": false, 00:18:09.408 "ddgst": false, 00:18:09.408 "psk": "key0", 00:18:09.408 "allow_unrecognized_csi": false, 00:18:09.408 "method": "bdev_nvme_attach_controller", 00:18:09.408 "req_id": 1 00:18:09.408 } 00:18:09.408 Got JSON-RPC error response 00:18:09.408 response: 00:18:09.408 { 00:18:09.408 "code": -5, 00:18:09.408 "message": "Input/output error" 00:18:09.408 } 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3035286 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3035286 ']' 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3035286 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3035286 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3035286' 00:18:09.408 killing process with pid 3035286 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3035286 00:18:09.408 Received shutdown signal, test time was about 10.000000 seconds 00:18:09.408 00:18:09.408 Latency(us) 00:18:09.408 [2024-12-06T16:54:57.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.408 [2024-12-06T16:54:57.235Z] =================================================================================================================== 00:18:09.408 [2024-12-06T16:54:57.235Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3035286 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QzUlFIWUbK 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.QzUlFIWUbK 00:18:09.408 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:09.668 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.668 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:09.668 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.668 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QzUlFIWUbK 00:18:09.668 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:09.668 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:09.668 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:09.668 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QzUlFIWUbK 00:18:09.668 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.668 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3035600 00:18:09.668 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:09.668 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3035600 /var/tmp/bdevperf.sock 00:18:09.668 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3035600 ']' 00:18:09.668 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.668 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.668 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:09.668 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.668 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.668 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:09.668 [2024-12-06 17:54:57.269747] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:18:09.669 [2024-12-06 17:54:57.269802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035600 ] 00:18:09.669 [2024-12-06 17:54:57.334737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.669 [2024-12-06 17:54:57.363437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.669 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.669 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:09.669 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QzUlFIWUbK 00:18:09.929 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:09.929 [2024-12-06 17:54:57.733817] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.929 [2024-12-06 17:54:57.741221] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:09.929 [2024-12-06 17:54:57.741240] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:09.929 [2024-12-06 17:54:57.741260] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:09.929 [2024-12-06 17:54:57.741987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ed5b0 (107): Transport endpoint is not connected 00:18:09.929 [2024-12-06 17:54:57.742983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ed5b0 (9): Bad file descriptor 00:18:09.929 [2024-12-06 17:54:57.743984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:09.929 [2024-12-06 17:54:57.743992] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:09.929 [2024-12-06 17:54:57.743998] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:09.929 [2024-12-06 17:54:57.744005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
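This failure mode is the interesting one: the key itself is valid, but the connection arrives as host2, and the target derives the TLS PSK identity from both NQNs (NVMe0R01 <hostnqn> <subnqn>, as the error text above shows), so the lookup registered for host1 finds nothing and the handshake dies with errno 107. A toy model of the lookup follows; the real code is posix_sock_psk_find_session_server_cb in SPDK's posix sock module, and the associative array here is only an illustration. The request/response dump after this records the same outcome as JSON-RPC error -5:

    declare -A psk_table=( ["NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1"]="key0" )
    identity="NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"
    [[ -n ${psk_table[$identity]+set} ]] || echo "Unable to find PSK for identity: $identity"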
00:18:09.929 request: 00:18:09.929 { 00:18:09.929 "name": "TLSTEST", 00:18:09.929 "trtype": "tcp", 00:18:09.929 "traddr": "10.0.0.2", 00:18:09.929 "adrfam": "ipv4", 00:18:09.929 "trsvcid": "4420", 00:18:09.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.929 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:09.929 "prchk_reftag": false, 00:18:09.929 "prchk_guard": false, 00:18:09.929 "hdgst": false, 00:18:09.929 "ddgst": false, 00:18:09.929 "psk": "key0", 00:18:09.929 "allow_unrecognized_csi": false, 00:18:09.929 "method": "bdev_nvme_attach_controller", 00:18:09.929 "req_id": 1 00:18:09.929 } 00:18:09.929 Got JSON-RPC error response 00:18:09.929 response: 00:18:09.929 { 00:18:09.929 "code": -5, 00:18:09.929 "message": "Input/output error" 00:18:09.929 } 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3035600 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3035600 ']' 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3035600 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3035600 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3035600' 00:18:10.190 killing process with pid 3035600 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3035600 00:18:10.190 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.190 00:18:10.190 Latency(us) 00:18:10.190 [2024-12-06T16:54:58.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.190 [2024-12-06T16:54:58.017Z] =================================================================================================================== 00:18:10.190 [2024-12-06T16:54:58.017Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3035600 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QzUlFIWUbK 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.QzUlFIWUbK 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QzUlFIWUbK 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QzUlFIWUbK 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3035623 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3035623 /var/tmp/bdevperf.sock 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3035623 ']' 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.190 17:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:10.190 [2024-12-06 17:54:57.934977] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:18:10.190 [2024-12-06 17:54:57.935033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035623 ] 00:18:10.190 [2024-12-06 17:54:58.000372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.451 [2024-12-06 17:54:58.028506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.451 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.451 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:10.451 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QzUlFIWUbK 00:18:10.451 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:10.711 [2024-12-06 17:54:58.398972] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:10.711 [2024-12-06 17:54:58.403454] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:10.711 [2024-12-06 17:54:58.403472] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:10.711 [2024-12-06 17:54:58.403491] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:10.711 [2024-12-06 17:54:58.404166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90c5b0 (107): Transport endpoint is not connected 00:18:10.711 [2024-12-06 17:54:58.405161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x90c5b0 (9): Bad file descriptor 00:18:10.711 [2024-12-06 17:54:58.406162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:10.711 [2024-12-06 17:54:58.406170] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:10.711 [2024-12-06 17:54:58.406176] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:10.711 [2024-12-06 17:54:58.406183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:18:10.711 request: 00:18:10.711 { 00:18:10.711 "name": "TLSTEST", 00:18:10.711 "trtype": "tcp", 00:18:10.712 "traddr": "10.0.0.2", 00:18:10.712 "adrfam": "ipv4", 00:18:10.712 "trsvcid": "4420", 00:18:10.712 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:10.712 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:10.712 "prchk_reftag": false, 00:18:10.712 "prchk_guard": false, 00:18:10.712 "hdgst": false, 00:18:10.712 "ddgst": false, 00:18:10.712 "psk": "key0", 00:18:10.712 "allow_unrecognized_csi": false, 00:18:10.712 "method": "bdev_nvme_attach_controller", 00:18:10.712 "req_id": 1 00:18:10.712 } 00:18:10.712 Got JSON-RPC error response 00:18:10.712 response: 00:18:10.712 { 00:18:10.712 "code": -5, 00:18:10.712 "message": "Input/output error" 00:18:10.712 } 00:18:10.712 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3035623 00:18:10.712 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3035623 ']' 00:18:10.712 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3035623 00:18:10.712 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:10.712 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.712 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3035623 00:18:10.712 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:10.712 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:10.712 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3035623' 00:18:10.712 killing process with pid 3035623 00:18:10.712 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3035623 00:18:10.712 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.712 00:18:10.712 Latency(us) 00:18:10.712 [2024-12-06T16:54:58.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.712 [2024-12-06T16:54:58.539Z] =================================================================================================================== 00:18:10.712 [2024-12-06T16:54:58.539Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:10.712 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3035623 00:18:10.972 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:10.972 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:10.972 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:10.972 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:10.972 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:10.972 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:10.972 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:10.972 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:10.972 
17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:10.972 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.972 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:10.972 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:10.972 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:10.972 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:10.972 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:10.972 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:10.972 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:10.973 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:10.973 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3035947 00:18:10.973 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:10.973 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3035947 /var/tmp/bdevperf.sock 00:18:10.973 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3035947 ']' 00:18:10.973 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.973 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.973 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.973 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.973 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.973 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:10.973 [2024-12-06 17:54:58.597349] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:18:10.973 [2024-12-06 17:54:58.597405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035947 ] 00:18:10.973 [2024-12-06 17:54:58.662804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.973 [2024-12-06 17:54:58.690652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.973 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.973 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:10.973 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:11.233 [2024-12-06 17:54:58.900600] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:11.233 [2024-12-06 17:54:58.900627] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:11.233 request: 00:18:11.233 { 00:18:11.233 "name": "key0", 00:18:11.233 "path": "", 00:18:11.233 "method": "keyring_file_add_key", 00:18:11.233 "req_id": 1 00:18:11.233 } 00:18:11.233 Got JSON-RPC error response 00:18:11.233 response: 00:18:11.233 { 00:18:11.233 "code": -1, 00:18:11.233 "message": "Operation not permitted" 00:18:11.233 } 00:18:11.233 17:54:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:11.233 [2024-12-06 17:54:59.057069] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:11.233 [2024-12-06 17:54:59.057090] bdev_nvme.c:6748:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:11.492 request: 00:18:11.492 { 00:18:11.492 "name": "TLSTEST", 00:18:11.492 "trtype": "tcp", 00:18:11.492 "traddr": "10.0.0.2", 00:18:11.492 "adrfam": "ipv4", 00:18:11.492 "trsvcid": "4420", 00:18:11.492 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.492 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.492 "prchk_reftag": false, 00:18:11.492 "prchk_guard": false, 00:18:11.492 "hdgst": false, 00:18:11.492 "ddgst": false, 00:18:11.492 "psk": "key0", 00:18:11.492 "allow_unrecognized_csi": false, 00:18:11.492 "method": "bdev_nvme_attach_controller", 00:18:11.492 "req_id": 1 00:18:11.492 } 00:18:11.492 Got JSON-RPC error response 00:18:11.492 response: 00:18:11.492 { 00:18:11.492 "code": -126, 00:18:11.492 "message": "Required key not available" 00:18:11.492 } 00:18:11.492 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3035947 00:18:11.492 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3035947 ']' 00:18:11.492 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3035947 00:18:11.492 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:11.492 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.492 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3035947 00:18:11.492 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:11.492 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:11.492 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3035947' 00:18:11.493 killing process with pid 3035947 00:18:11.493 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3035947 00:18:11.493 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.493 00:18:11.493 Latency(us) 00:18:11.493 [2024-12-06T16:54:59.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.493 [2024-12-06T16:54:59.320Z] =================================================================================================================== 00:18:11.493 [2024-12-06T16:54:59.320Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:11.493 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3035947 00:18:11.493 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:11.493 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:11.493 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.493 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.493 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.493 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3029843 00:18:11.493 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3029843 ']' 00:18:11.493 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3029843 00:18:11.493 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:11.493 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.493 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3029843 00:18:11.493 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:11.493 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:11.493 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3029843' 00:18:11.493 killing process with pid 3029843 00:18:11.493 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3029843 00:18:11.493 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3029843 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:11.753 17:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.gSh4r2wfLy 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.gSh4r2wfLy 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3035981 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3035981 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3035981 ']' 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:11.753 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.753 [2024-12-06 17:54:59.436545] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:18:11.753 [2024-12-06 17:54:59.436587] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.753 [2024-12-06 17:54:59.497517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.753 [2024-12-06 17:54:59.525399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.753 [2024-12-06 17:54:59.525428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
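The key_long value above is produced by the inline python step just traced. A standalone sketch of that derivation, assuming the format_key convention from SPDK's nvmf/common.sh (the interchange string is "NVMeTLSkey-1:<hash>:" plus base64 of the literal key characters followed by their little-endian CRC32, plus a trailing ":"):

    key=00112233445566778899aabbccddeeff0011223344556677
    digest=2   # 2 selects the SHA-384 flavour of the interchange format
    # The hex characters are treated as a literal ASCII string, not decoded.
    python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); crc = zlib.crc32(k).to_bytes(4, "little"); print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k + crc).decode()))' "$key" "$digest"

This prints the same NVMeTLSkey-1:02:MDAx...wWXNJw==: string captured in key_long above.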
00:18:11.753 [2024-12-06 17:54:59.525433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.753 [2024-12-06 17:54:59.525438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.753 [2024-12-06 17:54:59.525442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.753 [2024-12-06 17:54:59.525922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.014 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.014 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:12.014 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:12.014 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:12.014 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.014 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.014 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.gSh4r2wfLy 00:18:12.014 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gSh4r2wfLy 00:18:12.014 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:12.014 [2024-12-06 17:54:59.765015] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.014 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:12.273 17:54:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:12.273 [2024-12-06 17:55:00.077790] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:12.273 [2024-12-06 17:55:00.077994] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.273 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:12.534 malloc0 00:18:12.535 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:12.794 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gSh4r2wfLy 00:18:12.794 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:13.055 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gSh4r2wfLy 00:18:13.055 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
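For orientation, the setup_nvmf_tgt sequence running here (and repeated for the later cases) reduces to the following RPCs, with the long rpc.py path shortened; -k on the listener is what enables TLS, and the namespace/key/host steps continue just below:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.gSh4r2wfLy
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0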
subnqn hostnqn psk 00:18:13.055 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:13.055 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:13.055 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gSh4r2wfLy 00:18:13.055 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.055 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3036333 00:18:13.055 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.055 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3036333 /var/tmp/bdevperf.sock 00:18:13.055 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3036333 ']' 00:18:13.055 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.055 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.055 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.055 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.055 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.055 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.055 [2024-12-06 17:55:00.739276] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:18:13.055 [2024-12-06 17:55:00.739318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3036333 ] 00:18:13.055 [2024-12-06 17:55:00.796179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.055 [2024-12-06 17:55:00.824669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.315 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.315 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:13.315 17:55:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gSh4r2wfLy 00:18:13.315 17:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:13.574 [2024-12-06 17:55:01.191219] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:13.574 TLSTESTn1 00:18:13.574 17:55:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:13.574 Running I/O for 10 seconds... 00:18:15.894 4647.00 IOPS, 18.15 MiB/s [2024-12-06T16:55:04.662Z] 4932.00 IOPS, 19.27 MiB/s [2024-12-06T16:55:05.614Z] 4786.33 IOPS, 18.70 MiB/s [2024-12-06T16:55:06.548Z] 4750.00 IOPS, 18.55 MiB/s [2024-12-06T16:55:07.483Z] 4780.20 IOPS, 18.67 MiB/s [2024-12-06T16:55:08.437Z] 4917.50 IOPS, 19.21 MiB/s [2024-12-06T16:55:09.487Z] 5012.57 IOPS, 19.58 MiB/s [2024-12-06T16:55:10.537Z] 5035.00 IOPS, 19.67 MiB/s [2024-12-06T16:55:11.477Z] 4953.89 IOPS, 19.35 MiB/s [2024-12-06T16:55:11.477Z] 4984.40 IOPS, 19.47 MiB/s 00:18:23.650 Latency(us) 00:18:23.650 [2024-12-06T16:55:11.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.650 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:23.650 Verification LBA range: start 0x0 length 0x2000 00:18:23.650 TLSTESTn1 : 10.02 4985.10 19.47 0.00 0.00 25635.51 4587.52 32331.09 00:18:23.650 [2024-12-06T16:55:11.477Z] =================================================================================================================== 00:18:23.650 [2024-12-06T16:55:11.477Z] Total : 4985.10 19.47 0.00 0.00 25635.51 4587.52 32331.09 00:18:23.650 { 00:18:23.650 "results": [ 00:18:23.650 { 00:18:23.650 "job": "TLSTESTn1", 00:18:23.650 "core_mask": "0x4", 00:18:23.650 "workload": "verify", 00:18:23.650 "status": "finished", 00:18:23.650 "verify_range": { 00:18:23.650 "start": 0, 00:18:23.650 "length": 8192 00:18:23.650 }, 00:18:23.650 "queue_depth": 128, 00:18:23.650 "io_size": 4096, 00:18:23.650 "runtime": 10.024071, 00:18:23.650 "iops": 4985.100364911621, 00:18:23.650 "mibps": 19.47304830043602, 00:18:23.650 "io_failed": 0, 00:18:23.650 "io_timeout": 0, 00:18:23.650 "avg_latency_us": 25635.50861206166, 00:18:23.650 "min_latency_us": 4587.52, 00:18:23.650 "max_latency_us": 32331.093333333334 00:18:23.650 } 00:18:23.650 ], 00:18:23.650 "core_count": 1 
00:18:23.650 } 00:18:23.650 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:23.650 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3036333 00:18:23.650 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3036333 ']' 00:18:23.650 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3036333 00:18:23.650 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:23.650 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.650 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3036333 00:18:23.650 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:23.650 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:23.650 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3036333' 00:18:23.650 killing process with pid 3036333 00:18:23.650 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3036333 00:18:23.650 Received shutdown signal, test time was about 10.000000 seconds 00:18:23.650 00:18:23.650 Latency(us) 00:18:23.650 [2024-12-06T16:55:11.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.650 [2024-12-06T16:55:11.477Z] =================================================================================================================== 00:18:23.650 [2024-12-06T16:55:11.477Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:23.650 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3036333 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.gSh4r2wfLy 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gSh4r2wfLy 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gSh4r2wfLy 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gSh4r2wfLy 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:23.910 17:55:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gSh4r2wfLy 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3038680 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3038680 /var/tmp/bdevperf.sock 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3038680 ']' 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.910 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:23.910 [2024-12-06 17:55:11.603550] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:18:23.910 [2024-12-06 17:55:11.603607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3038680 ] 00:18:23.910 [2024-12-06 17:55:11.668984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.910 [2024-12-06 17:55:11.697049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.170 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.170 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:24.170 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gSh4r2wfLy 00:18:24.170 [2024-12-06 17:55:11.907136] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gSh4r2wfLy': 0100666 00:18:24.170 [2024-12-06 17:55:11.907163] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:24.170 request: 00:18:24.170 { 00:18:24.170 "name": "key0", 00:18:24.170 "path": "/tmp/tmp.gSh4r2wfLy", 00:18:24.170 "method": "keyring_file_add_key", 00:18:24.170 "req_id": 1 00:18:24.170 } 00:18:24.170 Got JSON-RPC error response 00:18:24.170 response: 00:18:24.170 { 00:18:24.170 "code": -1, 00:18:24.170 "message": "Operation not permitted" 00:18:24.170 } 00:18:24.170 17:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:24.429 [2024-12-06 17:55:12.067607] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:24.430 [2024-12-06 17:55:12.067632] bdev_nvme.c:6748:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:24.430 request: 00:18:24.430 { 00:18:24.430 "name": "TLSTEST", 00:18:24.430 "trtype": "tcp", 00:18:24.430 "traddr": "10.0.0.2", 00:18:24.430 "adrfam": "ipv4", 00:18:24.430 "trsvcid": "4420", 00:18:24.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.430 "prchk_reftag": false, 00:18:24.430 "prchk_guard": false, 00:18:24.430 "hdgst": false, 00:18:24.430 "ddgst": false, 00:18:24.430 "psk": "key0", 00:18:24.430 "allow_unrecognized_csi": false, 00:18:24.430 "method": "bdev_nvme_attach_controller", 00:18:24.430 "req_id": 1 00:18:24.430 } 00:18:24.430 Got JSON-RPC error response 00:18:24.430 response: 00:18:24.430 { 00:18:24.430 "code": -126, 00:18:24.430 "message": "Required key not available" 00:18:24.430 } 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3038680 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3038680 ']' 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3038680 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3038680 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3038680' 00:18:24.430 killing process with pid 3038680 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3038680 00:18:24.430 Received shutdown signal, test time was about 10.000000 seconds 00:18:24.430 00:18:24.430 Latency(us) 00:18:24.430 [2024-12-06T16:55:12.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.430 [2024-12-06T16:55:12.257Z] =================================================================================================================== 00:18:24.430 [2024-12-06T16:55:12.257Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3038680 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3035981 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3035981 ']' 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3035981 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.430 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3035981 00:18:24.689 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:24.689 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:24.689 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3035981' 00:18:24.689 killing process with pid 3035981 00:18:24.689 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3035981 00:18:24.689 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3035981 00:18:24.689 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:24.689 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:24.689 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:24.689 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.689 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3039019 00:18:24.689 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3039019 00:18:24.689 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3039019 ']' 00:18:24.689 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.689 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.689 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.689 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.689 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.689 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:24.689 [2024-12-06 17:55:12.423607] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:18:24.689 [2024-12-06 17:55:12.423660] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.689 [2024-12-06 17:55:12.495623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.950 [2024-12-06 17:55:12.522836] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.950 [2024-12-06 17:55:12.522867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.950 [2024-12-06 17:55:12.522873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.950 [2024-12-06 17:55:12.522878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.950 [2024-12-06 17:55:12.522882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
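Both this target-side case and the client-side case just above hinge on the same check: the keyring_file backend refuses key files accessible beyond their owner. A minimal reproduction of the pattern being exercised (rpc.py path shortened):

    chmod 0666 /tmp/tmp.gSh4r2wfLy
    rpc.py keyring_file_add_key key0 /tmp/tmp.gSh4r2wfLy
    # rejected: "Invalid permissions for key file '/tmp/tmp.gSh4r2wfLy': 0100666"
    chmod 0600 /tmp/tmp.gSh4r2wfLy
    rpc.py keyring_file_add_key key0 /tmp/tmp.gSh4r2wfLy   # accepted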
00:18:24.950 [2024-12-06 17:55:12.523386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.950 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.950 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:24.950 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:24.950 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:24.950 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.950 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.950 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.gSh4r2wfLy 00:18:24.950 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:24.950 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.gSh4r2wfLy 00:18:24.950 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:24.950 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.950 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:24.950 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.950 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.gSh4r2wfLy 00:18:24.950 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gSh4r2wfLy 00:18:24.950 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:24.950 [2024-12-06 17:55:12.766809] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.210 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:25.210 17:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:25.470 [2024-12-06 17:55:13.079571] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:25.470 [2024-12-06 17:55:13.079771] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.470 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:25.470 malloc0 00:18:25.470 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:25.729 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gSh4r2wfLy 00:18:25.729 [2024-12-06 
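The permission error then cascades on the target side, as the next two RPC failures show: the key never lands in the keyring, so nvmf_subsystem_add_host cannot resolve it. In isolation:

    rpc.py keyring_file_add_key key0 /tmp/tmp.gSh4r2wfLy
    # -> code -1, "Operation not permitted" (the file is still 0666 at this point)
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # -> code -32603, "Internal error": Key 'key0' does not exist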
17:55:13.550385] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gSh4r2wfLy': 0100666 00:18:25.729 [2024-12-06 17:55:13.550405] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:25.729 request: 00:18:25.729 { 00:18:25.729 "name": "key0", 00:18:25.729 "path": "/tmp/tmp.gSh4r2wfLy", 00:18:25.729 "method": "keyring_file_add_key", 00:18:25.729 "req_id": 1 00:18:25.729 } 00:18:25.729 Got JSON-RPC error response 00:18:25.729 response: 00:18:25.729 { 00:18:25.729 "code": -1, 00:18:25.729 "message": "Operation not permitted" 00:18:25.729 } 00:18:25.989 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:25.989 [2024-12-06 17:55:13.706790] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:25.989 [2024-12-06 17:55:13.706814] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:25.989 request: 00:18:25.989 { 00:18:25.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.989 "host": "nqn.2016-06.io.spdk:host1", 00:18:25.989 "psk": "key0", 00:18:25.989 "method": "nvmf_subsystem_add_host", 00:18:25.989 "req_id": 1 00:18:25.989 } 00:18:25.989 Got JSON-RPC error response 00:18:25.989 response: 00:18:25.989 { 00:18:25.989 "code": -32603, 00:18:25.989 "message": "Internal error" 00:18:25.989 } 00:18:25.989 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:25.989 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.989 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.989 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.989 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3039019 00:18:25.989 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3039019 ']' 00:18:25.989 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3039019 00:18:25.989 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:25.989 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.989 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3039019 00:18:25.989 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:25.989 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:25.989 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3039019' 00:18:25.989 killing process with pid 3039019 00:18:25.989 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3039019 00:18:25.989 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3039019 00:18:26.250 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.gSh4r2wfLy 00:18:26.250 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:26.250 17:55:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:26.250 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:26.250 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.250 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3039383 00:18:26.250 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3039383 00:18:26.250 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3039383 ']' 00:18:26.250 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.250 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.250 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.250 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.250 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.250 17:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:26.250 [2024-12-06 17:55:13.923328] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:18:26.250 [2024-12-06 17:55:13.923379] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.250 [2024-12-06 17:55:13.994711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.250 [2024-12-06 17:55:14.022386] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.250 [2024-12-06 17:55:14.022418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.250 [2024-12-06 17:55:14.022426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.250 [2024-12-06 17:55:14.022431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.250 [2024-12-06 17:55:14.022437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
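The run that follows repeats the positive pattern from the earlier TLSTESTn1 pass: the target is brought up with the (now 0600 again) key, bdevperf attaches over TLS, and I/O is driven against the already-running -z process through bdevperf's companion script rather than by restarting it:

    spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
    # the earlier pass sustained ~4985 IOPS (19.47 MiB/s) over its 10 s verify run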
00:18:26.250 [2024-12-06 17:55:14.022876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.510 17:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.510 17:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:26.510 17:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:26.510 17:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:26.510 17:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.510 17:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.510 17:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.gSh4r2wfLy 00:18:26.510 17:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gSh4r2wfLy 00:18:26.510 17:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:26.510 [2024-12-06 17:55:14.262249] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.510 17:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:26.769 17:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:26.769 [2024-12-06 17:55:14.575014] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:26.769 [2024-12-06 17:55:14.575222] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:26.770 17:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:27.029 malloc0 00:18:27.029 17:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:27.289 17:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gSh4r2wfLy 00:18:27.289 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:27.548 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3039743 00:18:27.548 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:27.548 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3039743 /var/tmp/bdevperf.sock 00:18:27.548 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3039743 ']' 00:18:27.548 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.548 17:55:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:27.548 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.548 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.548 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.548 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.548 [2024-12-06 17:55:15.251975] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:18:27.548 [2024-12-06 17:55:15.252027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3039743 ] 00:18:27.548 [2024-12-06 17:55:15.317300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.548 [2024-12-06 17:55:15.346284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.808 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.808 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:27.808 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gSh4r2wfLy 00:18:27.808 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:28.068 [2024-12-06 17:55:15.716687] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:28.068 TLSTESTn1 00:18:28.068 17:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:28.327 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:28.327 "subsystems": [ 00:18:28.327 { 00:18:28.327 "subsystem": "keyring", 00:18:28.327 "config": [ 00:18:28.327 { 00:18:28.327 "method": "keyring_file_add_key", 00:18:28.327 "params": { 00:18:28.327 "name": "key0", 00:18:28.327 "path": "/tmp/tmp.gSh4r2wfLy" 00:18:28.327 } 00:18:28.327 } 00:18:28.327 ] 00:18:28.327 }, 00:18:28.327 { 00:18:28.327 "subsystem": "iobuf", 00:18:28.327 "config": [ 00:18:28.327 { 00:18:28.327 "method": "iobuf_set_options", 00:18:28.327 "params": { 00:18:28.327 "small_pool_count": 8192, 00:18:28.327 "large_pool_count": 1024, 00:18:28.327 "small_bufsize": 8192, 00:18:28.327 "large_bufsize": 135168, 00:18:28.327 "enable_numa": false 00:18:28.327 } 00:18:28.327 } 00:18:28.327 ] 00:18:28.327 }, 00:18:28.327 { 00:18:28.327 "subsystem": "sock", 00:18:28.327 "config": [ 00:18:28.327 { 00:18:28.327 "method": "sock_set_default_impl", 00:18:28.327 "params": { 00:18:28.327 
"impl_name": "posix" 00:18:28.327 } 00:18:28.327 }, 00:18:28.327 { 00:18:28.327 "method": "sock_impl_set_options", 00:18:28.327 "params": { 00:18:28.327 "impl_name": "ssl", 00:18:28.327 "recv_buf_size": 4096, 00:18:28.327 "send_buf_size": 4096, 00:18:28.327 "enable_recv_pipe": true, 00:18:28.327 "enable_quickack": false, 00:18:28.327 "enable_placement_id": 0, 00:18:28.327 "enable_zerocopy_send_server": true, 00:18:28.327 "enable_zerocopy_send_client": false, 00:18:28.327 "zerocopy_threshold": 0, 00:18:28.327 "tls_version": 0, 00:18:28.327 "enable_ktls": false 00:18:28.327 } 00:18:28.327 }, 00:18:28.327 { 00:18:28.327 "method": "sock_impl_set_options", 00:18:28.327 "params": { 00:18:28.327 "impl_name": "posix", 00:18:28.327 "recv_buf_size": 2097152, 00:18:28.327 "send_buf_size": 2097152, 00:18:28.327 "enable_recv_pipe": true, 00:18:28.327 "enable_quickack": false, 00:18:28.327 "enable_placement_id": 0, 00:18:28.327 "enable_zerocopy_send_server": true, 00:18:28.327 "enable_zerocopy_send_client": false, 00:18:28.327 "zerocopy_threshold": 0, 00:18:28.327 "tls_version": 0, 00:18:28.327 "enable_ktls": false 00:18:28.327 } 00:18:28.327 } 00:18:28.327 ] 00:18:28.327 }, 00:18:28.327 { 00:18:28.327 "subsystem": "vmd", 00:18:28.327 "config": [] 00:18:28.327 }, 00:18:28.327 { 00:18:28.327 "subsystem": "accel", 00:18:28.327 "config": [ 00:18:28.327 { 00:18:28.327 "method": "accel_set_options", 00:18:28.327 "params": { 00:18:28.327 "small_cache_size": 128, 00:18:28.327 "large_cache_size": 16, 00:18:28.327 "task_count": 2048, 00:18:28.327 "sequence_count": 2048, 00:18:28.327 "buf_count": 2048 00:18:28.327 } 00:18:28.327 } 00:18:28.327 ] 00:18:28.327 }, 00:18:28.327 { 00:18:28.327 "subsystem": "bdev", 00:18:28.327 "config": [ 00:18:28.327 { 00:18:28.327 "method": "bdev_set_options", 00:18:28.327 "params": { 00:18:28.327 "bdev_io_pool_size": 65535, 00:18:28.327 "bdev_io_cache_size": 256, 00:18:28.327 "bdev_auto_examine": true, 00:18:28.327 "iobuf_small_cache_size": 128, 00:18:28.327 "iobuf_large_cache_size": 16 00:18:28.328 } 00:18:28.328 }, 00:18:28.328 { 00:18:28.328 "method": "bdev_raid_set_options", 00:18:28.328 "params": { 00:18:28.328 "process_window_size_kb": 1024, 00:18:28.328 "process_max_bandwidth_mb_sec": 0 00:18:28.328 } 00:18:28.328 }, 00:18:28.328 { 00:18:28.328 "method": "bdev_iscsi_set_options", 00:18:28.328 "params": { 00:18:28.328 "timeout_sec": 30 00:18:28.328 } 00:18:28.328 }, 00:18:28.328 { 00:18:28.328 "method": "bdev_nvme_set_options", 00:18:28.328 "params": { 00:18:28.328 "action_on_timeout": "none", 00:18:28.328 "timeout_us": 0, 00:18:28.328 "timeout_admin_us": 0, 00:18:28.328 "keep_alive_timeout_ms": 10000, 00:18:28.328 "arbitration_burst": 0, 00:18:28.328 "low_priority_weight": 0, 00:18:28.328 "medium_priority_weight": 0, 00:18:28.328 "high_priority_weight": 0, 00:18:28.328 "nvme_adminq_poll_period_us": 10000, 00:18:28.328 "nvme_ioq_poll_period_us": 0, 00:18:28.328 "io_queue_requests": 0, 00:18:28.328 "delay_cmd_submit": true, 00:18:28.328 "transport_retry_count": 4, 00:18:28.328 "bdev_retry_count": 3, 00:18:28.328 "transport_ack_timeout": 0, 00:18:28.328 "ctrlr_loss_timeout_sec": 0, 00:18:28.328 "reconnect_delay_sec": 0, 00:18:28.328 "fast_io_fail_timeout_sec": 0, 00:18:28.328 "disable_auto_failback": false, 00:18:28.328 "generate_uuids": false, 00:18:28.328 "transport_tos": 0, 00:18:28.328 "nvme_error_stat": false, 00:18:28.328 "rdma_srq_size": 0, 00:18:28.328 "io_path_stat": false, 00:18:28.328 "allow_accel_sequence": false, 00:18:28.328 "rdma_max_cq_size": 0, 00:18:28.328 
"rdma_cm_event_timeout_ms": 0, 00:18:28.328 "dhchap_digests": [ 00:18:28.328 "sha256", 00:18:28.328 "sha384", 00:18:28.328 "sha512" 00:18:28.328 ], 00:18:28.328 "dhchap_dhgroups": [ 00:18:28.328 "null", 00:18:28.328 "ffdhe2048", 00:18:28.328 "ffdhe3072", 00:18:28.328 "ffdhe4096", 00:18:28.328 "ffdhe6144", 00:18:28.328 "ffdhe8192" 00:18:28.328 ], 00:18:28.328 "rdma_umr_per_io": false 00:18:28.328 } 00:18:28.328 }, 00:18:28.328 { 00:18:28.328 "method": "bdev_nvme_set_hotplug", 00:18:28.328 "params": { 00:18:28.328 "period_us": 100000, 00:18:28.328 "enable": false 00:18:28.328 } 00:18:28.328 }, 00:18:28.328 { 00:18:28.328 "method": "bdev_malloc_create", 00:18:28.328 "params": { 00:18:28.328 "name": "malloc0", 00:18:28.328 "num_blocks": 8192, 00:18:28.328 "block_size": 4096, 00:18:28.328 "physical_block_size": 4096, 00:18:28.328 "uuid": "806cbc98-82c5-4234-a2ef-1d2da50bf4a1", 00:18:28.328 "optimal_io_boundary": 0, 00:18:28.328 "md_size": 0, 00:18:28.328 "dif_type": 0, 00:18:28.328 "dif_is_head_of_md": false, 00:18:28.328 "dif_pi_format": 0 00:18:28.328 } 00:18:28.328 }, 00:18:28.328 { 00:18:28.328 "method": "bdev_wait_for_examine" 00:18:28.328 } 00:18:28.328 ] 00:18:28.328 }, 00:18:28.328 { 00:18:28.328 "subsystem": "nbd", 00:18:28.328 "config": [] 00:18:28.328 }, 00:18:28.328 { 00:18:28.328 "subsystem": "scheduler", 00:18:28.328 "config": [ 00:18:28.328 { 00:18:28.328 "method": "framework_set_scheduler", 00:18:28.328 "params": { 00:18:28.328 "name": "static" 00:18:28.328 } 00:18:28.328 } 00:18:28.328 ] 00:18:28.328 }, 00:18:28.328 { 00:18:28.328 "subsystem": "nvmf", 00:18:28.328 "config": [ 00:18:28.328 { 00:18:28.328 "method": "nvmf_set_config", 00:18:28.328 "params": { 00:18:28.328 "discovery_filter": "match_any", 00:18:28.328 "admin_cmd_passthru": { 00:18:28.328 "identify_ctrlr": false 00:18:28.328 }, 00:18:28.328 "dhchap_digests": [ 00:18:28.328 "sha256", 00:18:28.328 "sha384", 00:18:28.328 "sha512" 00:18:28.328 ], 00:18:28.328 "dhchap_dhgroups": [ 00:18:28.328 "null", 00:18:28.328 "ffdhe2048", 00:18:28.328 "ffdhe3072", 00:18:28.328 "ffdhe4096", 00:18:28.328 "ffdhe6144", 00:18:28.328 "ffdhe8192" 00:18:28.328 ] 00:18:28.328 } 00:18:28.328 }, 00:18:28.328 { 00:18:28.328 "method": "nvmf_set_max_subsystems", 00:18:28.328 "params": { 00:18:28.328 "max_subsystems": 1024 00:18:28.328 } 00:18:28.328 }, 00:18:28.328 { 00:18:28.328 "method": "nvmf_set_crdt", 00:18:28.328 "params": { 00:18:28.328 "crdt1": 0, 00:18:28.328 "crdt2": 0, 00:18:28.328 "crdt3": 0 00:18:28.328 } 00:18:28.328 }, 00:18:28.328 { 00:18:28.328 "method": "nvmf_create_transport", 00:18:28.328 "params": { 00:18:28.328 "trtype": "TCP", 00:18:28.328 "max_queue_depth": 128, 00:18:28.328 "max_io_qpairs_per_ctrlr": 127, 00:18:28.328 "in_capsule_data_size": 4096, 00:18:28.328 "max_io_size": 131072, 00:18:28.328 "io_unit_size": 131072, 00:18:28.328 "max_aq_depth": 128, 00:18:28.328 "num_shared_buffers": 511, 00:18:28.328 "buf_cache_size": 4294967295, 00:18:28.328 "dif_insert_or_strip": false, 00:18:28.328 "zcopy": false, 00:18:28.328 "c2h_success": false, 00:18:28.328 "sock_priority": 0, 00:18:28.328 "abort_timeout_sec": 1, 00:18:28.328 "ack_timeout": 0, 00:18:28.328 "data_wr_pool_size": 0 00:18:28.328 } 00:18:28.328 }, 00:18:28.328 { 00:18:28.328 "method": "nvmf_create_subsystem", 00:18:28.328 "params": { 00:18:28.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.328 "allow_any_host": false, 00:18:28.328 "serial_number": "SPDK00000000000001", 00:18:28.328 "model_number": "SPDK bdev Controller", 00:18:28.328 "max_namespaces": 10, 
00:18:28.328 "min_cntlid": 1, 00:18:28.328 "max_cntlid": 65519, 00:18:28.328 "ana_reporting": false 00:18:28.328 } 00:18:28.328 }, 00:18:28.328 { 00:18:28.328 "method": "nvmf_subsystem_add_host", 00:18:28.328 "params": { 00:18:28.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.328 "host": "nqn.2016-06.io.spdk:host1", 00:18:28.328 "psk": "key0" 00:18:28.328 } 00:18:28.328 }, 00:18:28.328 { 00:18:28.328 "method": "nvmf_subsystem_add_ns", 00:18:28.328 "params": { 00:18:28.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.328 "namespace": { 00:18:28.328 "nsid": 1, 00:18:28.328 "bdev_name": "malloc0", 00:18:28.328 "nguid": "806CBC9882C54234A2EF1D2DA50BF4A1", 00:18:28.328 "uuid": "806cbc98-82c5-4234-a2ef-1d2da50bf4a1", 00:18:28.328 "no_auto_visible": false 00:18:28.328 } 00:18:28.328 } 00:18:28.328 }, 00:18:28.328 { 00:18:28.328 "method": "nvmf_subsystem_add_listener", 00:18:28.328 "params": { 00:18:28.328 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.328 "listen_address": { 00:18:28.328 "trtype": "TCP", 00:18:28.328 "adrfam": "IPv4", 00:18:28.328 "traddr": "10.0.0.2", 00:18:28.328 "trsvcid": "4420" 00:18:28.328 }, 00:18:28.328 "secure_channel": true 00:18:28.328 } 00:18:28.328 } 00:18:28.328 ] 00:18:28.328 } 00:18:28.328 ] 00:18:28.328 }' 00:18:28.328 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:28.589 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:28.589 "subsystems": [ 00:18:28.589 { 00:18:28.589 "subsystem": "keyring", 00:18:28.589 "config": [ 00:18:28.589 { 00:18:28.589 "method": "keyring_file_add_key", 00:18:28.589 "params": { 00:18:28.589 "name": "key0", 00:18:28.589 "path": "/tmp/tmp.gSh4r2wfLy" 00:18:28.589 } 00:18:28.589 } 00:18:28.589 ] 00:18:28.589 }, 00:18:28.589 { 00:18:28.589 "subsystem": "iobuf", 00:18:28.589 "config": [ 00:18:28.589 { 00:18:28.589 "method": "iobuf_set_options", 00:18:28.589 "params": { 00:18:28.589 "small_pool_count": 8192, 00:18:28.589 "large_pool_count": 1024, 00:18:28.589 "small_bufsize": 8192, 00:18:28.589 "large_bufsize": 135168, 00:18:28.589 "enable_numa": false 00:18:28.589 } 00:18:28.589 } 00:18:28.589 ] 00:18:28.589 }, 00:18:28.589 { 00:18:28.589 "subsystem": "sock", 00:18:28.589 "config": [ 00:18:28.589 { 00:18:28.589 "method": "sock_set_default_impl", 00:18:28.589 "params": { 00:18:28.589 "impl_name": "posix" 00:18:28.589 } 00:18:28.589 }, 00:18:28.589 { 00:18:28.589 "method": "sock_impl_set_options", 00:18:28.589 "params": { 00:18:28.589 "impl_name": "ssl", 00:18:28.589 "recv_buf_size": 4096, 00:18:28.589 "send_buf_size": 4096, 00:18:28.589 "enable_recv_pipe": true, 00:18:28.589 "enable_quickack": false, 00:18:28.589 "enable_placement_id": 0, 00:18:28.589 "enable_zerocopy_send_server": true, 00:18:28.589 "enable_zerocopy_send_client": false, 00:18:28.589 "zerocopy_threshold": 0, 00:18:28.589 "tls_version": 0, 00:18:28.589 "enable_ktls": false 00:18:28.589 } 00:18:28.589 }, 00:18:28.589 { 00:18:28.589 "method": "sock_impl_set_options", 00:18:28.589 "params": { 00:18:28.589 "impl_name": "posix", 00:18:28.589 "recv_buf_size": 2097152, 00:18:28.589 "send_buf_size": 2097152, 00:18:28.589 "enable_recv_pipe": true, 00:18:28.589 "enable_quickack": false, 00:18:28.589 "enable_placement_id": 0, 00:18:28.589 "enable_zerocopy_send_server": true, 00:18:28.589 "enable_zerocopy_send_client": false, 00:18:28.589 "zerocopy_threshold": 0, 00:18:28.589 "tls_version": 0, 00:18:28.589 
"enable_ktls": false 00:18:28.589 } 00:18:28.589 } 00:18:28.589 ] 00:18:28.589 }, 00:18:28.589 { 00:18:28.589 "subsystem": "vmd", 00:18:28.589 "config": [] 00:18:28.589 }, 00:18:28.589 { 00:18:28.589 "subsystem": "accel", 00:18:28.589 "config": [ 00:18:28.589 { 00:18:28.589 "method": "accel_set_options", 00:18:28.589 "params": { 00:18:28.589 "small_cache_size": 128, 00:18:28.589 "large_cache_size": 16, 00:18:28.589 "task_count": 2048, 00:18:28.589 "sequence_count": 2048, 00:18:28.589 "buf_count": 2048 00:18:28.589 } 00:18:28.589 } 00:18:28.589 ] 00:18:28.589 }, 00:18:28.589 { 00:18:28.589 "subsystem": "bdev", 00:18:28.589 "config": [ 00:18:28.589 { 00:18:28.589 "method": "bdev_set_options", 00:18:28.589 "params": { 00:18:28.589 "bdev_io_pool_size": 65535, 00:18:28.589 "bdev_io_cache_size": 256, 00:18:28.589 "bdev_auto_examine": true, 00:18:28.589 "iobuf_small_cache_size": 128, 00:18:28.589 "iobuf_large_cache_size": 16 00:18:28.589 } 00:18:28.589 }, 00:18:28.589 { 00:18:28.589 "method": "bdev_raid_set_options", 00:18:28.589 "params": { 00:18:28.589 "process_window_size_kb": 1024, 00:18:28.589 "process_max_bandwidth_mb_sec": 0 00:18:28.589 } 00:18:28.589 }, 00:18:28.589 { 00:18:28.589 "method": "bdev_iscsi_set_options", 00:18:28.589 "params": { 00:18:28.589 "timeout_sec": 30 00:18:28.589 } 00:18:28.589 }, 00:18:28.589 { 00:18:28.589 "method": "bdev_nvme_set_options", 00:18:28.589 "params": { 00:18:28.589 "action_on_timeout": "none", 00:18:28.589 "timeout_us": 0, 00:18:28.589 "timeout_admin_us": 0, 00:18:28.589 "keep_alive_timeout_ms": 10000, 00:18:28.589 "arbitration_burst": 0, 00:18:28.589 "low_priority_weight": 0, 00:18:28.589 "medium_priority_weight": 0, 00:18:28.589 "high_priority_weight": 0, 00:18:28.589 "nvme_adminq_poll_period_us": 10000, 00:18:28.589 "nvme_ioq_poll_period_us": 0, 00:18:28.589 "io_queue_requests": 512, 00:18:28.589 "delay_cmd_submit": true, 00:18:28.589 "transport_retry_count": 4, 00:18:28.589 "bdev_retry_count": 3, 00:18:28.589 "transport_ack_timeout": 0, 00:18:28.589 "ctrlr_loss_timeout_sec": 0, 00:18:28.589 "reconnect_delay_sec": 0, 00:18:28.589 "fast_io_fail_timeout_sec": 0, 00:18:28.589 "disable_auto_failback": false, 00:18:28.589 "generate_uuids": false, 00:18:28.589 "transport_tos": 0, 00:18:28.589 "nvme_error_stat": false, 00:18:28.589 "rdma_srq_size": 0, 00:18:28.589 "io_path_stat": false, 00:18:28.589 "allow_accel_sequence": false, 00:18:28.589 "rdma_max_cq_size": 0, 00:18:28.589 "rdma_cm_event_timeout_ms": 0, 00:18:28.589 "dhchap_digests": [ 00:18:28.589 "sha256", 00:18:28.589 "sha384", 00:18:28.589 "sha512" 00:18:28.589 ], 00:18:28.589 "dhchap_dhgroups": [ 00:18:28.589 "null", 00:18:28.589 "ffdhe2048", 00:18:28.589 "ffdhe3072", 00:18:28.589 "ffdhe4096", 00:18:28.589 "ffdhe6144", 00:18:28.589 "ffdhe8192" 00:18:28.589 ], 00:18:28.589 "rdma_umr_per_io": false 00:18:28.589 } 00:18:28.589 }, 00:18:28.589 { 00:18:28.589 "method": "bdev_nvme_attach_controller", 00:18:28.589 "params": { 00:18:28.589 "name": "TLSTEST", 00:18:28.589 "trtype": "TCP", 00:18:28.589 "adrfam": "IPv4", 00:18:28.589 "traddr": "10.0.0.2", 00:18:28.589 "trsvcid": "4420", 00:18:28.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.589 "prchk_reftag": false, 00:18:28.589 "prchk_guard": false, 00:18:28.589 "ctrlr_loss_timeout_sec": 0, 00:18:28.589 "reconnect_delay_sec": 0, 00:18:28.589 "fast_io_fail_timeout_sec": 0, 00:18:28.589 "psk": "key0", 00:18:28.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.589 "hdgst": false, 00:18:28.589 "ddgst": false, 00:18:28.589 "multipath": "multipath" 
00:18:28.589 } 00:18:28.589 }, 00:18:28.589 { 00:18:28.589 "method": "bdev_nvme_set_hotplug", 00:18:28.589 "params": { 00:18:28.589 "period_us": 100000, 00:18:28.589 "enable": false 00:18:28.589 } 00:18:28.589 }, 00:18:28.589 { 00:18:28.589 "method": "bdev_wait_for_examine" 00:18:28.589 } 00:18:28.589 ] 00:18:28.589 }, 00:18:28.589 { 00:18:28.589 "subsystem": "nbd", 00:18:28.589 "config": [] 00:18:28.589 } 00:18:28.589 ] 00:18:28.589 }' 00:18:28.589 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3039743 00:18:28.589 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3039743 ']' 00:18:28.589 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3039743 00:18:28.589 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:28.589 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.589 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3039743 00:18:28.589 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:28.589 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:28.589 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3039743' 00:18:28.589 killing process with pid 3039743 00:18:28.589 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3039743 00:18:28.589 Received shutdown signal, test time was about 10.000000 seconds 00:18:28.589 00:18:28.589 Latency(us) 00:18:28.589 [2024-12-06T16:55:16.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.589 [2024-12-06T16:55:16.416Z] =================================================================================================================== 00:18:28.589 [2024-12-06T16:55:16.416Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:28.589 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3039743 00:18:28.590 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3039383 00:18:28.590 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3039383 ']' 00:18:28.590 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3039383 00:18:28.590 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:28.590 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.590 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3039383 00:18:28.851 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:28.851 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:28.851 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3039383' 00:18:28.851 killing process with pid 3039383 00:18:28.851 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3039383 00:18:28.851 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 3039383 00:18:28.851 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:28.851 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.851 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.851 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.851 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:28.851 "subsystems": [ 00:18:28.851 { 00:18:28.851 "subsystem": "keyring", 00:18:28.851 "config": [ 00:18:28.851 { 00:18:28.851 "method": "keyring_file_add_key", 00:18:28.851 "params": { 00:18:28.851 "name": "key0", 00:18:28.851 "path": "/tmp/tmp.gSh4r2wfLy" 00:18:28.851 } 00:18:28.851 } 00:18:28.851 ] 00:18:28.851 }, 00:18:28.851 { 00:18:28.851 "subsystem": "iobuf", 00:18:28.851 "config": [ 00:18:28.851 { 00:18:28.851 "method": "iobuf_set_options", 00:18:28.851 "params": { 00:18:28.851 "small_pool_count": 8192, 00:18:28.851 "large_pool_count": 1024, 00:18:28.851 "small_bufsize": 8192, 00:18:28.851 "large_bufsize": 135168, 00:18:28.851 "enable_numa": false 00:18:28.851 } 00:18:28.851 } 00:18:28.851 ] 00:18:28.851 }, 00:18:28.851 { 00:18:28.851 "subsystem": "sock", 00:18:28.851 "config": [ 00:18:28.851 { 00:18:28.851 "method": "sock_set_default_impl", 00:18:28.851 "params": { 00:18:28.851 "impl_name": "posix" 00:18:28.851 } 00:18:28.851 }, 00:18:28.851 { 00:18:28.851 "method": "sock_impl_set_options", 00:18:28.851 "params": { 00:18:28.851 "impl_name": "ssl", 00:18:28.851 "recv_buf_size": 4096, 00:18:28.851 "send_buf_size": 4096, 00:18:28.851 "enable_recv_pipe": true, 00:18:28.851 "enable_quickack": false, 00:18:28.851 "enable_placement_id": 0, 00:18:28.851 "enable_zerocopy_send_server": true, 00:18:28.851 "enable_zerocopy_send_client": false, 00:18:28.851 "zerocopy_threshold": 0, 00:18:28.851 "tls_version": 0, 00:18:28.851 "enable_ktls": false 00:18:28.851 } 00:18:28.851 }, 00:18:28.851 { 00:18:28.851 "method": "sock_impl_set_options", 00:18:28.851 "params": { 00:18:28.851 "impl_name": "posix", 00:18:28.851 "recv_buf_size": 2097152, 00:18:28.851 "send_buf_size": 2097152, 00:18:28.851 "enable_recv_pipe": true, 00:18:28.851 "enable_quickack": false, 00:18:28.851 "enable_placement_id": 0, 00:18:28.851 "enable_zerocopy_send_server": true, 00:18:28.851 "enable_zerocopy_send_client": false, 00:18:28.851 "zerocopy_threshold": 0, 00:18:28.851 "tls_version": 0, 00:18:28.851 "enable_ktls": false 00:18:28.851 } 00:18:28.851 } 00:18:28.851 ] 00:18:28.851 }, 00:18:28.851 { 00:18:28.851 "subsystem": "vmd", 00:18:28.851 "config": [] 00:18:28.851 }, 00:18:28.851 { 00:18:28.851 "subsystem": "accel", 00:18:28.851 "config": [ 00:18:28.851 { 00:18:28.851 "method": "accel_set_options", 00:18:28.851 "params": { 00:18:28.851 "small_cache_size": 128, 00:18:28.851 "large_cache_size": 16, 00:18:28.851 "task_count": 2048, 00:18:28.851 "sequence_count": 2048, 00:18:28.851 "buf_count": 2048 00:18:28.851 } 00:18:28.851 } 00:18:28.851 ] 00:18:28.851 }, 00:18:28.851 { 00:18:28.851 "subsystem": "bdev", 00:18:28.851 "config": [ 00:18:28.851 { 00:18:28.851 "method": "bdev_set_options", 00:18:28.851 "params": { 00:18:28.851 "bdev_io_pool_size": 65535, 00:18:28.851 "bdev_io_cache_size": 256, 00:18:28.851 "bdev_auto_examine": true, 00:18:28.851 "iobuf_small_cache_size": 128, 00:18:28.851 "iobuf_large_cache_size": 16 00:18:28.851 } 00:18:28.851 }, 00:18:28.851 { 00:18:28.851 "method": 
"bdev_raid_set_options", 00:18:28.851 "params": { 00:18:28.851 "process_window_size_kb": 1024, 00:18:28.851 "process_max_bandwidth_mb_sec": 0 00:18:28.851 } 00:18:28.851 }, 00:18:28.851 { 00:18:28.851 "method": "bdev_iscsi_set_options", 00:18:28.851 "params": { 00:18:28.851 "timeout_sec": 30 00:18:28.851 } 00:18:28.851 }, 00:18:28.851 { 00:18:28.851 "method": "bdev_nvme_set_options", 00:18:28.851 "params": { 00:18:28.851 "action_on_timeout": "none", 00:18:28.851 "timeout_us": 0, 00:18:28.851 "timeout_admin_us": 0, 00:18:28.851 "keep_alive_timeout_ms": 10000, 00:18:28.851 "arbitration_burst": 0, 00:18:28.851 "low_priority_weight": 0, 00:18:28.851 "medium_priority_weight": 0, 00:18:28.851 "high_priority_weight": 0, 00:18:28.851 "nvme_adminq_poll_period_us": 10000, 00:18:28.851 "nvme_ioq_poll_period_us": 0, 00:18:28.851 "io_queue_requests": 0, 00:18:28.851 "delay_cmd_submit": true, 00:18:28.851 "transport_retry_count": 4, 00:18:28.851 "bdev_retry_count": 3, 00:18:28.851 "transport_ack_timeout": 0, 00:18:28.851 "ctrlr_loss_timeout_sec": 0, 00:18:28.851 "reconnect_delay_sec": 0, 00:18:28.851 "fast_io_fail_timeout_sec": 0, 00:18:28.851 "disable_auto_failback": false, 00:18:28.851 "generate_uuids": false, 00:18:28.851 "transport_tos": 0, 00:18:28.851 "nvme_error_stat": false, 00:18:28.851 "rdma_srq_size": 0, 00:18:28.851 "io_path_stat": false, 00:18:28.851 "allow_accel_sequence": false, 00:18:28.851 "rdma_max_cq_size": 0, 00:18:28.851 "rdma_cm_event_timeout_ms": 0, 00:18:28.851 "dhchap_digests": [ 00:18:28.851 "sha256", 00:18:28.851 "sha384", 00:18:28.851 "sha512" 00:18:28.851 ], 00:18:28.851 "dhchap_dhgroups": [ 00:18:28.851 "null", 00:18:28.851 "ffdhe2048", 00:18:28.851 "ffdhe3072", 00:18:28.851 "ffdhe4096", 00:18:28.851 "ffdhe6144", 00:18:28.851 "ffdhe8192" 00:18:28.851 ], 00:18:28.851 "rdma_umr_per_io": false 00:18:28.851 } 00:18:28.851 }, 00:18:28.851 { 00:18:28.851 "method": "bdev_nvme_set_hotplug", 00:18:28.851 "params": { 00:18:28.851 "period_us": 100000, 00:18:28.851 "enable": false 00:18:28.851 } 00:18:28.851 }, 00:18:28.851 { 00:18:28.851 "method": "bdev_malloc_create", 00:18:28.851 "params": { 00:18:28.851 "name": "malloc0", 00:18:28.851 "num_blocks": 8192, 00:18:28.851 "block_size": 4096, 00:18:28.851 "physical_block_size": 4096, 00:18:28.851 "uuid": "806cbc98-82c5-4234-a2ef-1d2da50bf4a1", 00:18:28.851 "optimal_io_boundary": 0, 00:18:28.851 "md_size": 0, 00:18:28.851 "dif_type": 0, 00:18:28.851 "dif_is_head_of_md": false, 00:18:28.851 "dif_pi_format": 0 00:18:28.851 } 00:18:28.851 }, 00:18:28.851 { 00:18:28.851 "method": "bdev_wait_for_examine" 00:18:28.851 } 00:18:28.851 ] 00:18:28.851 }, 00:18:28.851 { 00:18:28.851 "subsystem": "nbd", 00:18:28.851 "config": [] 00:18:28.851 }, 00:18:28.851 { 00:18:28.852 "subsystem": "scheduler", 00:18:28.852 "config": [ 00:18:28.852 { 00:18:28.852 "method": "framework_set_scheduler", 00:18:28.852 "params": { 00:18:28.852 "name": "static" 00:18:28.852 } 00:18:28.852 } 00:18:28.852 ] 00:18:28.852 }, 00:18:28.852 { 00:18:28.852 "subsystem": "nvmf", 00:18:28.852 "config": [ 00:18:28.852 { 00:18:28.852 "method": "nvmf_set_config", 00:18:28.852 "params": { 00:18:28.852 "discovery_filter": "match_any", 00:18:28.852 "admin_cmd_passthru": { 00:18:28.852 "identify_ctrlr": false 00:18:28.852 }, 00:18:28.852 "dhchap_digests": [ 00:18:28.852 "sha256", 00:18:28.852 "sha384", 00:18:28.852 "sha512" 00:18:28.852 ], 00:18:28.852 "dhchap_dhgroups": [ 00:18:28.852 "null", 00:18:28.852 "ffdhe2048", 00:18:28.852 "ffdhe3072", 00:18:28.852 "ffdhe4096", 00:18:28.852 
"ffdhe6144", 00:18:28.852 "ffdhe8192" 00:18:28.852 ] 00:18:28.852 } 00:18:28.852 }, 00:18:28.852 { 00:18:28.852 "method": "nvmf_set_max_subsystems", 00:18:28.852 "params": { 00:18:28.852 "max_subsystems": 1024 00:18:28.852 } 00:18:28.852 }, 00:18:28.852 { 00:18:28.852 "method": "nvmf_set_crdt", 00:18:28.852 "params": { 00:18:28.852 "crdt1": 0, 00:18:28.852 "crdt2": 0, 00:18:28.852 "crdt3": 0 00:18:28.852 } 00:18:28.852 }, 00:18:28.852 { 00:18:28.852 "method": "nvmf_create_transport", 00:18:28.852 "params": { 00:18:28.852 "trtype": "TCP", 00:18:28.852 "max_queue_depth": 128, 00:18:28.852 "max_io_qpairs_per_ctrlr": 127, 00:18:28.852 "in_capsule_data_size": 4096, 00:18:28.852 "max_io_size": 131072, 00:18:28.852 "io_unit_size": 131072, 00:18:28.852 "max_aq_depth": 128, 00:18:28.852 "num_shared_buffers": 511, 00:18:28.852 "buf_cache_size": 4294967295, 00:18:28.852 "dif_insert_or_strip": false, 00:18:28.852 "zcopy": false, 00:18:28.852 "c2h_success": false, 00:18:28.852 "sock_priority": 0, 00:18:28.852 "abort_timeout_sec": 1, 00:18:28.852 "ack_timeout": 0, 00:18:28.852 "data_wr_pool_size": 0 00:18:28.852 } 00:18:28.852 }, 00:18:28.852 { 00:18:28.852 "method": "nvmf_create_subsystem", 00:18:28.852 "params": { 00:18:28.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.852 "allow_any_host": false, 00:18:28.852 "serial_number": "SPDK00000000000001", 00:18:28.852 "model_number": "SPDK bdev Controller", 00:18:28.852 "max_namespaces": 10, 00:18:28.852 "min_cntlid": 1, 00:18:28.852 "max_cntlid": 65519, 00:18:28.852 "ana_reporting": false 00:18:28.852 } 00:18:28.852 }, 00:18:28.852 { 00:18:28.852 "method": "nvmf_subsystem_add_host", 00:18:28.852 "params": { 00:18:28.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.852 "host": "nqn.2016-06.io.spdk:host1", 00:18:28.852 "psk": "key0" 00:18:28.852 } 00:18:28.852 }, 00:18:28.852 { 00:18:28.852 "method": "nvmf_subsystem_add_ns", 00:18:28.852 "params": { 00:18:28.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.852 "namespace": { 00:18:28.852 "nsid": 1, 00:18:28.852 "bdev_name": "malloc0", 00:18:28.852 "nguid": "806CBC9882C54234A2EF1D2DA50BF4A1", 00:18:28.852 "uuid": "806cbc98-82c5-4234-a2ef-1d2da50bf4a1", 00:18:28.852 "no_auto_visible": false 00:18:28.852 } 00:18:28.852 } 00:18:28.852 }, 00:18:28.852 { 00:18:28.852 "method": "nvmf_subsystem_add_listener", 00:18:28.852 "params": { 00:18:28.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.852 "listen_address": { 00:18:28.852 "trtype": "TCP", 00:18:28.852 "adrfam": "IPv4", 00:18:28.852 "traddr": "10.0.0.2", 00:18:28.852 "trsvcid": "4420" 00:18:28.852 }, 00:18:28.852 "secure_channel": true 00:18:28.852 } 00:18:28.852 } 00:18:28.852 ] 00:18:28.852 } 00:18:28.852 ] 00:18:28.852 }' 00:18:28.852 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3040049 00:18:28.852 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3040049 00:18:28.852 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3040049 ']' 00:18:28.852 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.852 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.852 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:28.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.852 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.852 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.852 17:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:28.852 [2024-12-06 17:55:16.604073] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:18:28.852 [2024-12-06 17:55:16.604136] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.852 [2024-12-06 17:55:16.675369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.111 [2024-12-06 17:55:16.703700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.111 [2024-12-06 17:55:16.703730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.111 [2024-12-06 17:55:16.703735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.111 [2024-12-06 17:55:16.703740] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.111 [2024-12-06 17:55:16.703745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:29.111 [2024-12-06 17:55:16.704247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.111 [2024-12-06 17:55:16.898196] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.111 [2024-12-06 17:55:16.930218] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:29.111 [2024-12-06 17:55:16.930413] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.679 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.679 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:29.679 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:29.679 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.679 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.679 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.679 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3040123 00:18:29.679 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3040123 /var/tmp/bdevperf.sock 00:18:29.679 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3040123 ']' 00:18:29.679 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.679 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.679 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:29.679 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.679 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.679 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:29.679 17:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:29.679 "subsystems": [ 00:18:29.679 { 00:18:29.679 "subsystem": "keyring", 00:18:29.679 "config": [ 00:18:29.679 { 00:18:29.679 "method": "keyring_file_add_key", 00:18:29.679 "params": { 00:18:29.679 "name": "key0", 00:18:29.679 "path": "/tmp/tmp.gSh4r2wfLy" 00:18:29.679 } 00:18:29.679 } 00:18:29.679 ] 00:18:29.679 }, 00:18:29.679 { 00:18:29.679 "subsystem": "iobuf", 00:18:29.679 "config": [ 00:18:29.679 { 00:18:29.679 "method": "iobuf_set_options", 00:18:29.679 "params": { 00:18:29.679 "small_pool_count": 8192, 00:18:29.679 "large_pool_count": 1024, 00:18:29.679 "small_bufsize": 8192, 00:18:29.679 "large_bufsize": 135168, 00:18:29.679 "enable_numa": false 00:18:29.679 } 00:18:29.679 } 00:18:29.679 ] 00:18:29.679 }, 00:18:29.679 { 00:18:29.679 "subsystem": "sock", 00:18:29.679 "config": [ 00:18:29.679 { 00:18:29.679 "method": "sock_set_default_impl", 00:18:29.679 "params": { 00:18:29.679 "impl_name": "posix" 00:18:29.679 } 00:18:29.679 }, 00:18:29.679 { 00:18:29.679 "method": "sock_impl_set_options", 00:18:29.679 "params": { 00:18:29.679 "impl_name": "ssl", 00:18:29.679 "recv_buf_size": 4096, 00:18:29.679 "send_buf_size": 4096, 00:18:29.679 "enable_recv_pipe": true, 00:18:29.679 "enable_quickack": false, 00:18:29.679 "enable_placement_id": 0, 00:18:29.679 "enable_zerocopy_send_server": true, 00:18:29.679 "enable_zerocopy_send_client": false, 00:18:29.679 "zerocopy_threshold": 0, 00:18:29.679 "tls_version": 0, 00:18:29.679 "enable_ktls": false 00:18:29.679 } 00:18:29.679 }, 00:18:29.679 { 00:18:29.679 "method": "sock_impl_set_options", 00:18:29.679 "params": { 00:18:29.679 "impl_name": "posix", 00:18:29.679 "recv_buf_size": 2097152, 00:18:29.679 "send_buf_size": 2097152, 00:18:29.679 "enable_recv_pipe": true, 00:18:29.679 "enable_quickack": false, 00:18:29.679 "enable_placement_id": 0, 00:18:29.679 "enable_zerocopy_send_server": true, 00:18:29.679 "enable_zerocopy_send_client": false, 00:18:29.679 "zerocopy_threshold": 0, 00:18:29.679 "tls_version": 0, 00:18:29.679 "enable_ktls": false 00:18:29.679 } 00:18:29.679 } 00:18:29.679 ] 00:18:29.679 }, 00:18:29.679 { 00:18:29.679 "subsystem": "vmd", 00:18:29.679 "config": [] 00:18:29.679 }, 00:18:29.679 { 00:18:29.679 "subsystem": "accel", 00:18:29.680 "config": [ 00:18:29.680 { 00:18:29.680 "method": "accel_set_options", 00:18:29.680 "params": { 00:18:29.680 "small_cache_size": 128, 00:18:29.680 "large_cache_size": 16, 00:18:29.680 "task_count": 2048, 00:18:29.680 "sequence_count": 2048, 00:18:29.680 "buf_count": 2048 00:18:29.680 } 00:18:29.680 } 00:18:29.680 ] 00:18:29.680 }, 00:18:29.680 { 00:18:29.680 "subsystem": "bdev", 00:18:29.680 "config": [ 00:18:29.680 { 00:18:29.680 "method": "bdev_set_options", 00:18:29.680 "params": { 00:18:29.680 "bdev_io_pool_size": 65535, 
00:18:29.680 "bdev_io_cache_size": 256, 00:18:29.680 "bdev_auto_examine": true, 00:18:29.680 "iobuf_small_cache_size": 128, 00:18:29.680 "iobuf_large_cache_size": 16 00:18:29.680 } 00:18:29.680 }, 00:18:29.680 { 00:18:29.680 "method": "bdev_raid_set_options", 00:18:29.680 "params": { 00:18:29.680 "process_window_size_kb": 1024, 00:18:29.680 "process_max_bandwidth_mb_sec": 0 00:18:29.680 } 00:18:29.680 }, 00:18:29.680 { 00:18:29.680 "method": "bdev_iscsi_set_options", 00:18:29.680 "params": { 00:18:29.680 "timeout_sec": 30 00:18:29.680 } 00:18:29.680 }, 00:18:29.680 { 00:18:29.680 "method": "bdev_nvme_set_options", 00:18:29.680 "params": { 00:18:29.680 "action_on_timeout": "none", 00:18:29.680 "timeout_us": 0, 00:18:29.680 "timeout_admin_us": 0, 00:18:29.680 "keep_alive_timeout_ms": 10000, 00:18:29.680 "arbitration_burst": 0, 00:18:29.680 "low_priority_weight": 0, 00:18:29.680 "medium_priority_weight": 0, 00:18:29.680 "high_priority_weight": 0, 00:18:29.680 "nvme_adminq_poll_period_us": 10000, 00:18:29.680 "nvme_ioq_poll_period_us": 0, 00:18:29.680 "io_queue_requests": 512, 00:18:29.680 "delay_cmd_submit": true, 00:18:29.680 "transport_retry_count": 4, 00:18:29.680 "bdev_retry_count": 3, 00:18:29.680 "transport_ack_timeout": 0, 00:18:29.680 "ctrlr_loss_timeout_sec": 0, 00:18:29.680 "reconnect_delay_sec": 0, 00:18:29.680 "fast_io_fail_timeout_sec": 0, 00:18:29.680 "disable_auto_failback": false, 00:18:29.680 "generate_uuids": false, 00:18:29.680 "transport_tos": 0, 00:18:29.680 "nvme_error_stat": false, 00:18:29.680 "rdma_srq_size": 0, 00:18:29.680 "io_path_stat": false, 00:18:29.680 "allow_accel_sequence": false, 00:18:29.680 "rdma_max_cq_size": 0, 00:18:29.680 "rdma_cm_event_timeout_ms": 0, 00:18:29.680 "dhchap_digests": [ 00:18:29.680 "sha256", 00:18:29.680 "sha384", 00:18:29.680 "sha512" 00:18:29.680 ], 00:18:29.680 "dhchap_dhgroups": [ 00:18:29.680 "null", 00:18:29.680 "ffdhe2048", 00:18:29.680 "ffdhe3072", 00:18:29.680 "ffdhe4096", 00:18:29.680 "ffdhe6144", 00:18:29.680 "ffdhe8192" 00:18:29.680 ], 00:18:29.680 "rdma_umr_per_io": false 00:18:29.680 } 00:18:29.680 }, 00:18:29.680 { 00:18:29.680 "method": "bdev_nvme_attach_controller", 00:18:29.680 "params": { 00:18:29.680 "name": "TLSTEST", 00:18:29.680 "trtype": "TCP", 00:18:29.680 "adrfam": "IPv4", 00:18:29.680 "traddr": "10.0.0.2", 00:18:29.680 "trsvcid": "4420", 00:18:29.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.680 "prchk_reftag": false, 00:18:29.680 "prchk_guard": false, 00:18:29.680 "ctrlr_loss_timeout_sec": 0, 00:18:29.680 "reconnect_delay_sec": 0, 00:18:29.680 "fast_io_fail_timeout_sec": 0, 00:18:29.680 "psk": "key0", 00:18:29.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:29.680 "hdgst": false, 00:18:29.680 "ddgst": false, 00:18:29.680 "multipath": "multipath" 00:18:29.680 } 00:18:29.680 }, 00:18:29.680 { 00:18:29.680 "method": "bdev_nvme_set_hotplug", 00:18:29.680 "params": { 00:18:29.680 "period_us": 100000, 00:18:29.680 "enable": false 00:18:29.680 } 00:18:29.680 }, 00:18:29.680 { 00:18:29.680 "method": "bdev_wait_for_examine" 00:18:29.680 } 00:18:29.680 ] 00:18:29.680 }, 00:18:29.680 { 00:18:29.680 "subsystem": "nbd", 00:18:29.680 "config": [] 00:18:29.680 } 00:18:29.680 ] 00:18:29.680 }' 00:18:29.680 [2024-12-06 17:55:17.432293] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
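The bdevperf process traced above is started with -z, so it sits idle until driven over /var/tmp/bdevperf.sock, and its whole configuration, including the keyring entry for the TLS PSK, arrives as the JSON blob that tls.sh echoes into -c /dev/fd/63 via process substitution. A minimal sketch of the same launch pattern, assuming a trimmed keyring-only config (the real run feeds the full dump printed above; the flags and key path are as logged, everything else is illustrative):

# Sketch only: inline JSON config handed to bdevperf through process
# substitution, mirroring the -c /dev/fd/63 invocation seen in this trace.
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 \
    -c <(echo '{"subsystems":[{"subsystem":"keyring","config":[
          {"method":"keyring_file_add_key",
           "params":{"name":"key0","path":"/tmp/tmp.gSh4r2wfLy"}}]}]}')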
00:18:29.680 [2024-12-06 17:55:17.432341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3040123 ]
00:18:29.680 [2024-12-06 17:55:17.496701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:29.939 [2024-12-06 17:55:17.525604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:29.939 [2024-12-06 17:55:17.660821] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:30.505 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:30.505 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:30.505 17:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:18:30.505 Running I/O for 10 seconds...
00:18:32.822 5638.00 IOPS, 22.02 MiB/s [2024-12-06T16:55:21.588Z] 5018.00 IOPS, 19.60 MiB/s [2024-12-06T16:55:22.527Z] 4766.33 IOPS, 18.62 MiB/s [2024-12-06T16:55:23.468Z] 4620.75 IOPS, 18.05 MiB/s [2024-12-06T16:55:24.408Z] 4801.20 IOPS, 18.75 MiB/s [2024-12-06T16:55:25.344Z] 4670.83 IOPS, 18.25 MiB/s [2024-12-06T16:55:26.741Z] 4618.00 IOPS, 18.04 MiB/s [2024-12-06T16:55:27.307Z] 4556.12 IOPS, 17.80 MiB/s [2024-12-06T16:55:28.693Z] 4643.44 IOPS, 18.14 MiB/s [2024-12-06T16:55:28.693Z] 4589.80 IOPS, 17.93 MiB/s
00:18:40.866 Latency(us)
00:18:40.866 [2024-12-06T16:55:28.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:40.866 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:40.866 Verification LBA range: start 0x0 length 0x2000
00:18:40.866 TLSTESTn1 : 10.06 4574.41 17.87 0.00 0.00 27885.53 5761.71 58545.49
00:18:40.866 [2024-12-06T16:55:28.693Z] ===================================================================================================================
00:18:40.866 [2024-12-06T16:55:28.694Z] Total : 4574.41 17.87 0.00 0.00 27885.53 5761.71 58545.49
00:18:40.867 {
00:18:40.867   "results": [
00:18:40.867     {
00:18:40.867       "job": "TLSTESTn1",
00:18:40.867       "core_mask": "0x4",
00:18:40.867       "workload": "verify",
00:18:40.867       "status": "finished",
00:18:40.867       "verify_range": {
00:18:40.867         "start": 0,
00:18:40.867         "length": 8192
00:18:40.867       },
00:18:40.867       "queue_depth": 128,
00:18:40.867       "io_size": 4096,
00:18:40.867       "runtime": 10.061633,
00:18:40.867       "iops": 4574.406560048454,
00:18:40.867       "mibps": 17.868775625189272,
00:18:40.867       "io_failed": 0,
00:18:40.867       "io_timeout": 0,
00:18:40.867       "avg_latency_us": 27885.53376729095,
00:18:40.867       "min_latency_us": 5761.706666666667,
00:18:40.867       "max_latency_us": 58545.49333333333
00:18:40.867     }
00:18:40.867   ],
00:18:40.867   "core_count": 1
00:18:40.867 }
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3040123
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3040123 ']'
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3040123
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3040123
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3040123'
00:18:40.867 killing process with pid 3040123
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3040123
00:18:40.867 Received shutdown signal, test time was about 10.000000 seconds
00:18:40.867
00:18:40.867 Latency(us)
00:18:40.867 [2024-12-06T16:55:28.694Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:40.867 [2024-12-06T16:55:28.694Z] ===================================================================================================================
00:18:40.867 [2024-12-06T16:55:28.694Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3040123
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3040049
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3040049 ']'
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3040049
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3040049
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3040049'
00:18:40.867 killing process with pid 3040049
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3040049
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3040049
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3042710
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3042710
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3042710 ']'
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:40.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:40.867 17:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:18:41.127 [2024-12-06 17:55:28.721342] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization...
00:18:41.127 [2024-12-06 17:55:28.721398] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:41.127 [2024-12-06 17:55:28.805992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:41.127 [2024-12-06 17:55:28.842024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:41.127 [2024-12-06 17:55:28.842059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:41.127 [2024-12-06 17:55:28.842069] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:41.127 [2024-12-06 17:55:28.842077] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:41.127 [2024-12-06 17:55:28.842084] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
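Once the fresh nvmf_tgt (pid 3042710) is up, the setup_nvmf_tgt helper traced below repeats the target-side TLS bring-up. Condensed into plain RPC calls (arguments exactly as logged, with only the long workspace prefix dropped for readability), the sequence is:

# Target-side TLS setup, condensed from the target/tls.sh@50-59 trace below.
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gSh4r2wfLy
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0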
00:18:41.127 [2024-12-06 17:55:28.842669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:41.696 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:41.696 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:41.696 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:41.696 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:41.696 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:41.696 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:41.696 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.gSh4r2wfLy
00:18:41.696 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gSh4r2wfLy
00:18:41.696 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:18:41.955 [2024-12-06 17:55:29.654532] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:41.955 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:18:42.214 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:18:42.214 [2024-12-06 17:55:29.955287] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:18:42.214 [2024-12-06 17:55:29.955508] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:42.214 17:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:18:42.473 malloc0
00:18:42.473 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:18:42.473 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gSh4r2wfLy
00:18:42.731 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:18:42.991 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
00:18:42.991 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3043147
00:18:42.991 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:18:42.991 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3043147 /var/tmp/bdevperf.sock
00:18:42.991 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3043147 ']'
00:18:42.991 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:42.991 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:42.991 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:42.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:42.991 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:42.991 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:42.991 [2024-12-06 17:55:30.616701] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization...
00:18:42.991 [2024-12-06 17:55:30.616742] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3043147 ]
00:18:42.991 [2024-12-06 17:55:30.672785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:42.991 [2024-12-06 17:55:30.702614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:42.991 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:42.991 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:42.991 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gSh4r2wfLy
00:18:43.250 17:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:18:43.250 [2024-12-06 17:55:31.074036] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:43.509 nvme0n1
00:18:43.509 17:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:43.509 Running I/O for 1 seconds...
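The one-second verify run whose results follow was set up in the trace just above: the PSK is registered with the bdevperf application under the same key name, a TLS-secured controller is attached, and bdevperf.py drives the I/O. Condensed, with arguments exactly as logged:

# Initiator-side TLS attach and run, condensed from target/tls.sh@229-234 above.
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gSh4r2wfLy
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests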
00:18:44.707 3568.00 IOPS, 13.93 MiB/s
00:18:44.707 Latency(us)
00:18:44.707 [2024-12-06T16:55:32.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:44.707 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:44.707 Verification LBA range: start 0x0 length 0x2000
00:18:44.707 nvme0n1 : 1.06 3493.08 13.64 0.00 0.00 35736.25 4560.21 53084.16
00:18:44.707 [2024-12-06T16:55:32.534Z] ===================================================================================================================
00:18:44.707 [2024-12-06T16:55:32.534Z] Total : 3493.08 13.64 0.00 0.00 35736.25 4560.21 53084.16
00:18:44.707 {
00:18:44.707   "results": [
00:18:44.707     {
00:18:44.707       "job": "nvme0n1",
00:18:44.707       "core_mask": "0x2",
00:18:44.707       "workload": "verify",
00:18:44.707       "status": "finished",
00:18:44.707       "verify_range": {
00:18:44.707         "start": 0,
00:18:44.707         "length": 8192
00:18:44.707       },
00:18:44.707       "queue_depth": 128,
00:18:44.707       "io_size": 4096,
00:18:44.707       "runtime": 1.057807,
00:18:44.707       "iops": 3493.0757690202468,
00:18:44.707       "mibps": 13.644827222735339,
00:18:44.707       "io_failed": 0,
00:18:44.707       "io_timeout": 0,
00:18:44.707       "avg_latency_us": 35736.249447000446,
00:18:44.708       "min_latency_us": 4560.213333333333,
00:18:44.708       "max_latency_us": 53084.16
00:18:44.708     }
00:18:44.708   ],
00:18:44.708   "core_count": 1
00:18:44.708 }
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3043147
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3043147 ']'
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3043147
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3043147
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3043147'
00:18:44.708 killing process with pid 3043147
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3043147
00:18:44.708 Received shutdown signal, test time was about 1.000000 seconds
00:18:44.708
00:18:44.708 Latency(us)
00:18:44.708 [2024-12-06T16:55:32.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:44.708 [2024-12-06T16:55:32.535Z] ===================================================================================================================
00:18:44.708 [2024-12-06T16:55:32.535Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3043147
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3042710
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3042710 ']'
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3042710
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3042710
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3042710'
00:18:44.708 killing process with pid 3042710
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3042710
00:18:44.708 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3042710
00:18:44.968 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart
00:18:44.968 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:18:44.968 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:44.968 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:44.968 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3043499
00:18:44.968 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3043499
00:18:44.968 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:18:44.968 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3043499 ']'
00:18:44.968 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:44.968 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:44.968 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:44.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:44.968 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:44.968 17:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:44.968 [2024-12-06 17:55:32.671391] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization...
00:18:44.968 [2024-12-06 17:55:32.671448] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:44.968 [2024-12-06 17:55:32.757898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:44.968 [2024-12-06 17:55:32.793851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:44.968 [2024-12-06 17:55:32.793890] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:44.968 [2024-12-06 17:55:32.793898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:44.968 [2024-12-06 17:55:32.793905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:44.968 [2024-12-06 17:55:32.793910] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:44.968 [2024-12-06 17:55:32.794511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd
00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:45.905 [2024-12-06 17:55:33.476044] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:45.905 malloc0
00:18:45.905 [2024-12-06 17:55:33.502592] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:18:45.905 [2024-12-06 17:55:33.502826] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3043849
00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3043849 /var/tmp/bdevperf.sock
00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3043849 ']'
00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:45.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
00:18:45.905 [2024-12-06 17:55:33.564711] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization...
00:18:45.905 [2024-12-06 17:55:33.564763] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3043849 ] 00:18:45.905 [2024-12-06 17:55:33.630376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.905 [2024-12-06 17:55:33.659930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:45.905 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gSh4r2wfLy 00:18:46.164 17:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:46.423 [2024-12-06 17:55:34.027281] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:46.423 nvme0n1 00:18:46.424 17:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:46.424 Running I/O for 1 seconds... 00:18:47.622 5404.00 IOPS, 21.11 MiB/s 00:18:47.622 Latency(us) 00:18:47.622 [2024-12-06T16:55:35.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.622 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:47.622 Verification LBA range: start 0x0 length 0x2000 00:18:47.622 nvme0n1 : 1.04 5335.40 20.84 0.00 0.00 23637.30 4587.52 58545.49 00:18:47.622 [2024-12-06T16:55:35.449Z] =================================================================================================================== 00:18:47.622 [2024-12-06T16:55:35.449Z] Total : 5335.40 20.84 0.00 0.00 23637.30 4587.52 58545.49 00:18:47.622 { 00:18:47.622 "results": [ 00:18:47.622 { 00:18:47.622 "job": "nvme0n1", 00:18:47.622 "core_mask": "0x2", 00:18:47.622 "workload": "verify", 00:18:47.622 "status": "finished", 00:18:47.622 "verify_range": { 00:18:47.622 "start": 0, 00:18:47.622 "length": 8192 00:18:47.622 }, 00:18:47.622 "queue_depth": 128, 00:18:47.622 "io_size": 4096, 00:18:47.622 "runtime": 1.037035, 00:18:47.622 "iops": 5335.4033373994125, 00:18:47.622 "mibps": 20.841419286716455, 00:18:47.622 "io_failed": 0, 00:18:47.622 "io_timeout": 0, 00:18:47.622 "avg_latency_us": 23637.29570215073, 00:18:47.622 "min_latency_us": 4587.52, 00:18:47.622 "max_latency_us": 58545.49333333333 00:18:47.622 } 00:18:47.622 ], 00:18:47.622 "core_count": 1 00:18:47.622 } 00:18:47.622 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:47.622 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.622 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.622 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.622 17:55:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:47.622 "subsystems": [ 00:18:47.622 { 00:18:47.622 "subsystem": "keyring", 00:18:47.622 "config": [ 00:18:47.622 { 00:18:47.622 "method": "keyring_file_add_key", 00:18:47.622 "params": { 00:18:47.622 "name": "key0", 00:18:47.622 "path": "/tmp/tmp.gSh4r2wfLy" 00:18:47.622 } 00:18:47.622 } 00:18:47.622 ] 00:18:47.622 }, 00:18:47.622 { 00:18:47.622 "subsystem": "iobuf", 00:18:47.622 "config": [ 00:18:47.622 { 00:18:47.622 "method": "iobuf_set_options", 00:18:47.622 "params": { 00:18:47.622 "small_pool_count": 8192, 00:18:47.622 "large_pool_count": 1024, 00:18:47.622 "small_bufsize": 8192, 00:18:47.622 "large_bufsize": 135168, 00:18:47.622 "enable_numa": false 00:18:47.622 } 00:18:47.622 } 00:18:47.622 ] 00:18:47.622 }, 00:18:47.622 { 00:18:47.622 "subsystem": "sock", 00:18:47.622 "config": [ 00:18:47.622 { 00:18:47.622 "method": "sock_set_default_impl", 00:18:47.622 "params": { 00:18:47.622 "impl_name": "posix" 00:18:47.622 } 00:18:47.622 }, 00:18:47.622 { 00:18:47.622 "method": "sock_impl_set_options", 00:18:47.622 "params": { 00:18:47.622 "impl_name": "ssl", 00:18:47.622 "recv_buf_size": 4096, 00:18:47.622 "send_buf_size": 4096, 00:18:47.622 "enable_recv_pipe": true, 00:18:47.622 "enable_quickack": false, 00:18:47.622 "enable_placement_id": 0, 00:18:47.622 "enable_zerocopy_send_server": true, 00:18:47.622 "enable_zerocopy_send_client": false, 00:18:47.622 "zerocopy_threshold": 0, 00:18:47.622 "tls_version": 0, 00:18:47.622 "enable_ktls": false 00:18:47.622 } 00:18:47.622 }, 00:18:47.622 { 00:18:47.622 "method": "sock_impl_set_options", 00:18:47.622 "params": { 00:18:47.622 "impl_name": "posix", 00:18:47.622 "recv_buf_size": 2097152, 00:18:47.622 "send_buf_size": 2097152, 00:18:47.622 "enable_recv_pipe": true, 00:18:47.622 "enable_quickack": false, 00:18:47.622 "enable_placement_id": 0, 00:18:47.622 "enable_zerocopy_send_server": true, 00:18:47.622 "enable_zerocopy_send_client": false, 00:18:47.622 "zerocopy_threshold": 0, 00:18:47.622 "tls_version": 0, 00:18:47.622 "enable_ktls": false 00:18:47.622 } 00:18:47.622 } 00:18:47.622 ] 00:18:47.622 }, 00:18:47.622 { 00:18:47.622 "subsystem": "vmd", 00:18:47.622 "config": [] 00:18:47.622 }, 00:18:47.622 { 00:18:47.622 "subsystem": "accel", 00:18:47.622 "config": [ 00:18:47.622 { 00:18:47.622 "method": "accel_set_options", 00:18:47.622 "params": { 00:18:47.622 "small_cache_size": 128, 00:18:47.622 "large_cache_size": 16, 00:18:47.622 "task_count": 2048, 00:18:47.622 "sequence_count": 2048, 00:18:47.622 "buf_count": 2048 00:18:47.622 } 00:18:47.622 } 00:18:47.622 ] 00:18:47.622 }, 00:18:47.622 { 00:18:47.622 "subsystem": "bdev", 00:18:47.622 "config": [ 00:18:47.622 { 00:18:47.622 "method": "bdev_set_options", 00:18:47.622 "params": { 00:18:47.622 "bdev_io_pool_size": 65535, 00:18:47.622 "bdev_io_cache_size": 256, 00:18:47.622 "bdev_auto_examine": true, 00:18:47.622 "iobuf_small_cache_size": 128, 00:18:47.622 "iobuf_large_cache_size": 16 00:18:47.622 } 00:18:47.622 }, 00:18:47.622 { 00:18:47.622 "method": "bdev_raid_set_options", 00:18:47.622 "params": { 00:18:47.622 "process_window_size_kb": 1024, 00:18:47.622 "process_max_bandwidth_mb_sec": 0 00:18:47.622 } 00:18:47.622 }, 00:18:47.622 { 00:18:47.622 "method": "bdev_iscsi_set_options", 00:18:47.622 "params": { 00:18:47.622 "timeout_sec": 30 00:18:47.622 } 00:18:47.622 }, 00:18:47.622 { 00:18:47.622 "method": "bdev_nvme_set_options", 00:18:47.622 "params": { 00:18:47.622 "action_on_timeout": "none", 00:18:47.622 
"timeout_us": 0, 00:18:47.622 "timeout_admin_us": 0, 00:18:47.622 "keep_alive_timeout_ms": 10000, 00:18:47.622 "arbitration_burst": 0, 00:18:47.622 "low_priority_weight": 0, 00:18:47.622 "medium_priority_weight": 0, 00:18:47.622 "high_priority_weight": 0, 00:18:47.622 "nvme_adminq_poll_period_us": 10000, 00:18:47.622 "nvme_ioq_poll_period_us": 0, 00:18:47.622 "io_queue_requests": 0, 00:18:47.622 "delay_cmd_submit": true, 00:18:47.622 "transport_retry_count": 4, 00:18:47.622 "bdev_retry_count": 3, 00:18:47.622 "transport_ack_timeout": 0, 00:18:47.622 "ctrlr_loss_timeout_sec": 0, 00:18:47.622 "reconnect_delay_sec": 0, 00:18:47.622 "fast_io_fail_timeout_sec": 0, 00:18:47.622 "disable_auto_failback": false, 00:18:47.622 "generate_uuids": false, 00:18:47.622 "transport_tos": 0, 00:18:47.622 "nvme_error_stat": false, 00:18:47.622 "rdma_srq_size": 0, 00:18:47.622 "io_path_stat": false, 00:18:47.622 "allow_accel_sequence": false, 00:18:47.622 "rdma_max_cq_size": 0, 00:18:47.622 "rdma_cm_event_timeout_ms": 0, 00:18:47.622 "dhchap_digests": [ 00:18:47.622 "sha256", 00:18:47.622 "sha384", 00:18:47.622 "sha512" 00:18:47.622 ], 00:18:47.622 "dhchap_dhgroups": [ 00:18:47.622 "null", 00:18:47.622 "ffdhe2048", 00:18:47.622 "ffdhe3072", 00:18:47.622 "ffdhe4096", 00:18:47.622 "ffdhe6144", 00:18:47.622 "ffdhe8192" 00:18:47.622 ], 00:18:47.622 "rdma_umr_per_io": false 00:18:47.622 } 00:18:47.622 }, 00:18:47.622 { 00:18:47.622 "method": "bdev_nvme_set_hotplug", 00:18:47.622 "params": { 00:18:47.622 "period_us": 100000, 00:18:47.622 "enable": false 00:18:47.622 } 00:18:47.622 }, 00:18:47.622 { 00:18:47.622 "method": "bdev_malloc_create", 00:18:47.622 "params": { 00:18:47.622 "name": "malloc0", 00:18:47.622 "num_blocks": 8192, 00:18:47.622 "block_size": 4096, 00:18:47.622 "physical_block_size": 4096, 00:18:47.622 "uuid": "4ac9207a-d72c-48c7-a899-83d9a5cc627b", 00:18:47.622 "optimal_io_boundary": 0, 00:18:47.622 "md_size": 0, 00:18:47.622 "dif_type": 0, 00:18:47.622 "dif_is_head_of_md": false, 00:18:47.622 "dif_pi_format": 0 00:18:47.622 } 00:18:47.622 }, 00:18:47.622 { 00:18:47.622 "method": "bdev_wait_for_examine" 00:18:47.622 } 00:18:47.622 ] 00:18:47.622 }, 00:18:47.622 { 00:18:47.622 "subsystem": "nbd", 00:18:47.622 "config": [] 00:18:47.622 }, 00:18:47.622 { 00:18:47.622 "subsystem": "scheduler", 00:18:47.622 "config": [ 00:18:47.622 { 00:18:47.622 "method": "framework_set_scheduler", 00:18:47.622 "params": { 00:18:47.622 "name": "static" 00:18:47.622 } 00:18:47.622 } 00:18:47.622 ] 00:18:47.622 }, 00:18:47.622 { 00:18:47.622 "subsystem": "nvmf", 00:18:47.622 "config": [ 00:18:47.622 { 00:18:47.622 "method": "nvmf_set_config", 00:18:47.622 "params": { 00:18:47.622 "discovery_filter": "match_any", 00:18:47.622 "admin_cmd_passthru": { 00:18:47.623 "identify_ctrlr": false 00:18:47.623 }, 00:18:47.623 "dhchap_digests": [ 00:18:47.623 "sha256", 00:18:47.623 "sha384", 00:18:47.623 "sha512" 00:18:47.623 ], 00:18:47.623 "dhchap_dhgroups": [ 00:18:47.623 "null", 00:18:47.623 "ffdhe2048", 00:18:47.623 "ffdhe3072", 00:18:47.623 "ffdhe4096", 00:18:47.623 "ffdhe6144", 00:18:47.623 "ffdhe8192" 00:18:47.623 ] 00:18:47.623 } 00:18:47.623 }, 00:18:47.623 { 00:18:47.623 "method": "nvmf_set_max_subsystems", 00:18:47.623 "params": { 00:18:47.623 "max_subsystems": 1024 00:18:47.623 } 00:18:47.623 }, 00:18:47.623 { 00:18:47.623 "method": "nvmf_set_crdt", 00:18:47.623 "params": { 00:18:47.623 "crdt1": 0, 00:18:47.623 "crdt2": 0, 00:18:47.623 "crdt3": 0 00:18:47.623 } 00:18:47.623 }, 00:18:47.623 { 00:18:47.623 "method": 
"nvmf_create_transport", 00:18:47.623 "params": { 00:18:47.623 "trtype": "TCP", 00:18:47.623 "max_queue_depth": 128, 00:18:47.623 "max_io_qpairs_per_ctrlr": 127, 00:18:47.623 "in_capsule_data_size": 4096, 00:18:47.623 "max_io_size": 131072, 00:18:47.623 "io_unit_size": 131072, 00:18:47.623 "max_aq_depth": 128, 00:18:47.623 "num_shared_buffers": 511, 00:18:47.623 "buf_cache_size": 4294967295, 00:18:47.623 "dif_insert_or_strip": false, 00:18:47.623 "zcopy": false, 00:18:47.623 "c2h_success": false, 00:18:47.623 "sock_priority": 0, 00:18:47.623 "abort_timeout_sec": 1, 00:18:47.623 "ack_timeout": 0, 00:18:47.623 "data_wr_pool_size": 0 00:18:47.623 } 00:18:47.623 }, 00:18:47.623 { 00:18:47.623 "method": "nvmf_create_subsystem", 00:18:47.623 "params": { 00:18:47.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.623 "allow_any_host": false, 00:18:47.623 "serial_number": "00000000000000000000", 00:18:47.623 "model_number": "SPDK bdev Controller", 00:18:47.623 "max_namespaces": 32, 00:18:47.623 "min_cntlid": 1, 00:18:47.623 "max_cntlid": 65519, 00:18:47.623 "ana_reporting": false 00:18:47.623 } 00:18:47.623 }, 00:18:47.623 { 00:18:47.623 "method": "nvmf_subsystem_add_host", 00:18:47.623 "params": { 00:18:47.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.623 "host": "nqn.2016-06.io.spdk:host1", 00:18:47.623 "psk": "key0" 00:18:47.623 } 00:18:47.623 }, 00:18:47.623 { 00:18:47.623 "method": "nvmf_subsystem_add_ns", 00:18:47.623 "params": { 00:18:47.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.623 "namespace": { 00:18:47.623 "nsid": 1, 00:18:47.623 "bdev_name": "malloc0", 00:18:47.623 "nguid": "4AC9207AD72C48C7A89983D9A5CC627B", 00:18:47.623 "uuid": "4ac9207a-d72c-48c7-a899-83d9a5cc627b", 00:18:47.623 "no_auto_visible": false 00:18:47.623 } 00:18:47.623 } 00:18:47.623 }, 00:18:47.623 { 00:18:47.623 "method": "nvmf_subsystem_add_listener", 00:18:47.623 "params": { 00:18:47.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.623 "listen_address": { 00:18:47.623 "trtype": "TCP", 00:18:47.623 "adrfam": "IPv4", 00:18:47.623 "traddr": "10.0.0.2", 00:18:47.623 "trsvcid": "4420" 00:18:47.623 }, 00:18:47.623 "secure_channel": false, 00:18:47.623 "sock_impl": "ssl" 00:18:47.623 } 00:18:47.623 } 00:18:47.623 ] 00:18:47.623 } 00:18:47.623 ] 00:18:47.623 }' 00:18:47.623 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:47.886 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:47.886 "subsystems": [ 00:18:47.886 { 00:18:47.886 "subsystem": "keyring", 00:18:47.886 "config": [ 00:18:47.886 { 00:18:47.886 "method": "keyring_file_add_key", 00:18:47.886 "params": { 00:18:47.886 "name": "key0", 00:18:47.886 "path": "/tmp/tmp.gSh4r2wfLy" 00:18:47.886 } 00:18:47.886 } 00:18:47.886 ] 00:18:47.886 }, 00:18:47.886 { 00:18:47.886 "subsystem": "iobuf", 00:18:47.886 "config": [ 00:18:47.886 { 00:18:47.886 "method": "iobuf_set_options", 00:18:47.886 "params": { 00:18:47.886 "small_pool_count": 8192, 00:18:47.886 "large_pool_count": 1024, 00:18:47.886 "small_bufsize": 8192, 00:18:47.886 "large_bufsize": 135168, 00:18:47.886 "enable_numa": false 00:18:47.886 } 00:18:47.886 } 00:18:47.886 ] 00:18:47.886 }, 00:18:47.886 { 00:18:47.886 "subsystem": "sock", 00:18:47.886 "config": [ 00:18:47.886 { 00:18:47.886 "method": "sock_set_default_impl", 00:18:47.886 "params": { 00:18:47.886 "impl_name": "posix" 00:18:47.887 } 00:18:47.887 }, 00:18:47.887 { 00:18:47.887 
"method": "sock_impl_set_options", 00:18:47.887 "params": { 00:18:47.887 "impl_name": "ssl", 00:18:47.887 "recv_buf_size": 4096, 00:18:47.887 "send_buf_size": 4096, 00:18:47.887 "enable_recv_pipe": true, 00:18:47.887 "enable_quickack": false, 00:18:47.887 "enable_placement_id": 0, 00:18:47.887 "enable_zerocopy_send_server": true, 00:18:47.887 "enable_zerocopy_send_client": false, 00:18:47.887 "zerocopy_threshold": 0, 00:18:47.887 "tls_version": 0, 00:18:47.887 "enable_ktls": false 00:18:47.887 } 00:18:47.887 }, 00:18:47.887 { 00:18:47.887 "method": "sock_impl_set_options", 00:18:47.887 "params": { 00:18:47.887 "impl_name": "posix", 00:18:47.887 "recv_buf_size": 2097152, 00:18:47.887 "send_buf_size": 2097152, 00:18:47.887 "enable_recv_pipe": true, 00:18:47.887 "enable_quickack": false, 00:18:47.887 "enable_placement_id": 0, 00:18:47.887 "enable_zerocopy_send_server": true, 00:18:47.887 "enable_zerocopy_send_client": false, 00:18:47.887 "zerocopy_threshold": 0, 00:18:47.887 "tls_version": 0, 00:18:47.887 "enable_ktls": false 00:18:47.887 } 00:18:47.887 } 00:18:47.887 ] 00:18:47.887 }, 00:18:47.887 { 00:18:47.887 "subsystem": "vmd", 00:18:47.887 "config": [] 00:18:47.887 }, 00:18:47.887 { 00:18:47.887 "subsystem": "accel", 00:18:47.887 "config": [ 00:18:47.887 { 00:18:47.887 "method": "accel_set_options", 00:18:47.887 "params": { 00:18:47.887 "small_cache_size": 128, 00:18:47.887 "large_cache_size": 16, 00:18:47.887 "task_count": 2048, 00:18:47.887 "sequence_count": 2048, 00:18:47.887 "buf_count": 2048 00:18:47.887 } 00:18:47.887 } 00:18:47.887 ] 00:18:47.887 }, 00:18:47.887 { 00:18:47.887 "subsystem": "bdev", 00:18:47.887 "config": [ 00:18:47.887 { 00:18:47.887 "method": "bdev_set_options", 00:18:47.887 "params": { 00:18:47.887 "bdev_io_pool_size": 65535, 00:18:47.887 "bdev_io_cache_size": 256, 00:18:47.887 "bdev_auto_examine": true, 00:18:47.887 "iobuf_small_cache_size": 128, 00:18:47.887 "iobuf_large_cache_size": 16 00:18:47.887 } 00:18:47.887 }, 00:18:47.887 { 00:18:47.887 "method": "bdev_raid_set_options", 00:18:47.887 "params": { 00:18:47.887 "process_window_size_kb": 1024, 00:18:47.887 "process_max_bandwidth_mb_sec": 0 00:18:47.887 } 00:18:47.887 }, 00:18:47.887 { 00:18:47.887 "method": "bdev_iscsi_set_options", 00:18:47.887 "params": { 00:18:47.887 "timeout_sec": 30 00:18:47.887 } 00:18:47.887 }, 00:18:47.887 { 00:18:47.887 "method": "bdev_nvme_set_options", 00:18:47.887 "params": { 00:18:47.887 "action_on_timeout": "none", 00:18:47.887 "timeout_us": 0, 00:18:47.887 "timeout_admin_us": 0, 00:18:47.887 "keep_alive_timeout_ms": 10000, 00:18:47.887 "arbitration_burst": 0, 00:18:47.887 "low_priority_weight": 0, 00:18:47.887 "medium_priority_weight": 0, 00:18:47.887 "high_priority_weight": 0, 00:18:47.887 "nvme_adminq_poll_period_us": 10000, 00:18:47.887 "nvme_ioq_poll_period_us": 0, 00:18:47.887 "io_queue_requests": 512, 00:18:47.887 "delay_cmd_submit": true, 00:18:47.887 "transport_retry_count": 4, 00:18:47.887 "bdev_retry_count": 3, 00:18:47.887 "transport_ack_timeout": 0, 00:18:47.887 "ctrlr_loss_timeout_sec": 0, 00:18:47.887 "reconnect_delay_sec": 0, 00:18:47.887 "fast_io_fail_timeout_sec": 0, 00:18:47.887 "disable_auto_failback": false, 00:18:47.887 "generate_uuids": false, 00:18:47.887 "transport_tos": 0, 00:18:47.887 "nvme_error_stat": false, 00:18:47.887 "rdma_srq_size": 0, 00:18:47.887 "io_path_stat": false, 00:18:47.887 "allow_accel_sequence": false, 00:18:47.887 "rdma_max_cq_size": 0, 00:18:47.887 "rdma_cm_event_timeout_ms": 0, 00:18:47.887 "dhchap_digests": [ 00:18:47.887 
"sha256", 00:18:47.887 "sha384", 00:18:47.887 "sha512" 00:18:47.887 ], 00:18:47.887 "dhchap_dhgroups": [ 00:18:47.887 "null", 00:18:47.887 "ffdhe2048", 00:18:47.887 "ffdhe3072", 00:18:47.887 "ffdhe4096", 00:18:47.887 "ffdhe6144", 00:18:47.887 "ffdhe8192" 00:18:47.887 ], 00:18:47.887 "rdma_umr_per_io": false 00:18:47.887 } 00:18:47.887 }, 00:18:47.887 { 00:18:47.887 "method": "bdev_nvme_attach_controller", 00:18:47.887 "params": { 00:18:47.887 "name": "nvme0", 00:18:47.887 "trtype": "TCP", 00:18:47.887 "adrfam": "IPv4", 00:18:47.887 "traddr": "10.0.0.2", 00:18:47.887 "trsvcid": "4420", 00:18:47.887 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.887 "prchk_reftag": false, 00:18:47.887 "prchk_guard": false, 00:18:47.887 "ctrlr_loss_timeout_sec": 0, 00:18:47.887 "reconnect_delay_sec": 0, 00:18:47.887 "fast_io_fail_timeout_sec": 0, 00:18:47.887 "psk": "key0", 00:18:47.887 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:47.887 "hdgst": false, 00:18:47.887 "ddgst": false, 00:18:47.887 "multipath": "multipath" 00:18:47.887 } 00:18:47.887 }, 00:18:47.887 { 00:18:47.887 "method": "bdev_nvme_set_hotplug", 00:18:47.887 "params": { 00:18:47.887 "period_us": 100000, 00:18:47.887 "enable": false 00:18:47.887 } 00:18:47.887 }, 00:18:47.887 { 00:18:47.887 "method": "bdev_enable_histogram", 00:18:47.887 "params": { 00:18:47.887 "name": "nvme0n1", 00:18:47.887 "enable": true 00:18:47.887 } 00:18:47.887 }, 00:18:47.887 { 00:18:47.887 "method": "bdev_wait_for_examine" 00:18:47.887 } 00:18:47.887 ] 00:18:47.887 }, 00:18:47.887 { 00:18:47.887 "subsystem": "nbd", 00:18:47.887 "config": [] 00:18:47.887 } 00:18:47.887 ] 00:18:47.887 }' 00:18:47.887 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3043849 00:18:47.887 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3043849 ']' 00:18:47.887 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3043849 00:18:47.887 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:47.887 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.887 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3043849 00:18:47.887 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:47.887 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:47.887 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3043849' 00:18:47.887 killing process with pid 3043849 00:18:47.887 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3043849 00:18:47.887 Received shutdown signal, test time was about 1.000000 seconds 00:18:47.887 00:18:47.887 Latency(us) 00:18:47.887 [2024-12-06T16:55:35.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.887 [2024-12-06T16:55:35.714Z] =================================================================================================================== 00:18:47.887 [2024-12-06T16:55:35.714Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:47.887 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3043849 00:18:47.887 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3043499 00:18:47.887 17:55:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3043499 ']' 00:18:47.887 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3043499 00:18:47.887 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:47.887 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.887 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3043499 00:18:48.208 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:48.208 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:48.208 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3043499' 00:18:48.208 killing process with pid 3043499 00:18:48.208 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3043499 00:18:48.208 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3043499 00:18:48.208 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:48.208 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:48.208 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.208 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.208 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:48.208 "subsystems": [ 00:18:48.208 { 00:18:48.208 "subsystem": "keyring", 00:18:48.208 "config": [ 00:18:48.208 { 00:18:48.208 "method": "keyring_file_add_key", 00:18:48.208 "params": { 00:18:48.208 "name": "key0", 00:18:48.208 "path": "/tmp/tmp.gSh4r2wfLy" 00:18:48.208 } 00:18:48.208 } 00:18:48.208 ] 00:18:48.208 }, 00:18:48.208 { 00:18:48.208 "subsystem": "iobuf", 00:18:48.208 "config": [ 00:18:48.208 { 00:18:48.208 "method": "iobuf_set_options", 00:18:48.208 "params": { 00:18:48.208 "small_pool_count": 8192, 00:18:48.208 "large_pool_count": 1024, 00:18:48.208 "small_bufsize": 8192, 00:18:48.209 "large_bufsize": 135168, 00:18:48.209 "enable_numa": false 00:18:48.209 } 00:18:48.209 } 00:18:48.209 ] 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "subsystem": "sock", 00:18:48.209 "config": [ 00:18:48.209 { 00:18:48.209 "method": "sock_set_default_impl", 00:18:48.209 "params": { 00:18:48.209 "impl_name": "posix" 00:18:48.209 } 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "method": "sock_impl_set_options", 00:18:48.209 "params": { 00:18:48.209 "impl_name": "ssl", 00:18:48.209 "recv_buf_size": 4096, 00:18:48.209 "send_buf_size": 4096, 00:18:48.209 "enable_recv_pipe": true, 00:18:48.209 "enable_quickack": false, 00:18:48.209 "enable_placement_id": 0, 00:18:48.209 "enable_zerocopy_send_server": true, 00:18:48.209 "enable_zerocopy_send_client": false, 00:18:48.209 "zerocopy_threshold": 0, 00:18:48.209 "tls_version": 0, 00:18:48.209 "enable_ktls": false 00:18:48.209 } 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "method": "sock_impl_set_options", 00:18:48.209 "params": { 00:18:48.209 "impl_name": "posix", 00:18:48.209 "recv_buf_size": 2097152, 00:18:48.209 "send_buf_size": 2097152, 00:18:48.209 "enable_recv_pipe": true, 00:18:48.209 "enable_quickack": false, 00:18:48.209 "enable_placement_id": 0, 
00:18:48.209 "enable_zerocopy_send_server": true, 00:18:48.209 "enable_zerocopy_send_client": false, 00:18:48.209 "zerocopy_threshold": 0, 00:18:48.209 "tls_version": 0, 00:18:48.209 "enable_ktls": false 00:18:48.209 } 00:18:48.209 } 00:18:48.209 ] 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "subsystem": "vmd", 00:18:48.209 "config": [] 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "subsystem": "accel", 00:18:48.209 "config": [ 00:18:48.209 { 00:18:48.209 "method": "accel_set_options", 00:18:48.209 "params": { 00:18:48.209 "small_cache_size": 128, 00:18:48.209 "large_cache_size": 16, 00:18:48.209 "task_count": 2048, 00:18:48.209 "sequence_count": 2048, 00:18:48.209 "buf_count": 2048 00:18:48.209 } 00:18:48.209 } 00:18:48.209 ] 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "subsystem": "bdev", 00:18:48.209 "config": [ 00:18:48.209 { 00:18:48.209 "method": "bdev_set_options", 00:18:48.209 "params": { 00:18:48.209 "bdev_io_pool_size": 65535, 00:18:48.209 "bdev_io_cache_size": 256, 00:18:48.209 "bdev_auto_examine": true, 00:18:48.209 "iobuf_small_cache_size": 128, 00:18:48.209 "iobuf_large_cache_size": 16 00:18:48.209 } 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "method": "bdev_raid_set_options", 00:18:48.209 "params": { 00:18:48.209 "process_window_size_kb": 1024, 00:18:48.209 "process_max_bandwidth_mb_sec": 0 00:18:48.209 } 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "method": "bdev_iscsi_set_options", 00:18:48.209 "params": { 00:18:48.209 "timeout_sec": 30 00:18:48.209 } 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "method": "bdev_nvme_set_options", 00:18:48.209 "params": { 00:18:48.209 "action_on_timeout": "none", 00:18:48.209 "timeout_us": 0, 00:18:48.209 "timeout_admin_us": 0, 00:18:48.209 "keep_alive_timeout_ms": 10000, 00:18:48.209 "arbitration_burst": 0, 00:18:48.209 "low_priority_weight": 0, 00:18:48.209 "medium_priority_weight": 0, 00:18:48.209 "high_priority_weight": 0, 00:18:48.209 "nvme_adminq_poll_period_us": 10000, 00:18:48.209 "nvme_ioq_poll_period_us": 0, 00:18:48.209 "io_queue_requests": 0, 00:18:48.209 "delay_cmd_submit": true, 00:18:48.209 "transport_retry_count": 4, 00:18:48.209 "bdev_retry_count": 3, 00:18:48.209 "transport_ack_timeout": 0, 00:18:48.209 "ctrlr_loss_timeout_sec": 0, 00:18:48.209 "reconnect_delay_sec": 0, 00:18:48.209 "fast_io_fail_timeout_sec": 0, 00:18:48.209 "disable_auto_failback": false, 00:18:48.209 "generate_uuids": false, 00:18:48.209 "transport_tos": 0, 00:18:48.209 "nvme_error_stat": false, 00:18:48.209 "rdma_srq_size": 0, 00:18:48.209 "io_path_stat": false, 00:18:48.209 "allow_accel_sequence": false, 00:18:48.209 "rdma_max_cq_size": 0, 00:18:48.209 "rdma_cm_event_timeout_ms": 0, 00:18:48.209 "dhchap_digests": [ 00:18:48.209 "sha256", 00:18:48.209 "sha384", 00:18:48.209 "sha512" 00:18:48.209 ], 00:18:48.209 "dhchap_dhgroups": [ 00:18:48.209 "null", 00:18:48.209 "ffdhe2048", 00:18:48.209 "ffdhe3072", 00:18:48.209 "ffdhe4096", 00:18:48.209 "ffdhe6144", 00:18:48.209 "ffdhe8192" 00:18:48.209 ], 00:18:48.209 "rdma_umr_per_io": false 00:18:48.209 } 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "method": "bdev_nvme_set_hotplug", 00:18:48.209 "params": { 00:18:48.209 "period_us": 100000, 00:18:48.209 "enable": false 00:18:48.209 } 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "method": "bdev_malloc_create", 00:18:48.209 "params": { 00:18:48.209 "name": "malloc0", 00:18:48.209 "num_blocks": 8192, 00:18:48.209 "block_size": 4096, 00:18:48.209 "physical_block_size": 4096, 00:18:48.209 "uuid": "4ac9207a-d72c-48c7-a899-83d9a5cc627b", 00:18:48.209 
"optimal_io_boundary": 0, 00:18:48.209 "md_size": 0, 00:18:48.209 "dif_type": 0, 00:18:48.209 "dif_is_head_of_md": false, 00:18:48.209 "dif_pi_format": 0 00:18:48.209 } 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "method": "bdev_wait_for_examine" 00:18:48.209 } 00:18:48.209 ] 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "subsystem": "nbd", 00:18:48.209 "config": [] 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "subsystem": "scheduler", 00:18:48.209 "config": [ 00:18:48.209 { 00:18:48.209 "method": "framework_set_scheduler", 00:18:48.209 "params": { 00:18:48.209 "name": "static" 00:18:48.209 } 00:18:48.209 } 00:18:48.209 ] 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "subsystem": "nvmf", 00:18:48.209 "config": [ 00:18:48.209 { 00:18:48.209 "method": "nvmf_set_config", 00:18:48.209 "params": { 00:18:48.209 "discovery_filter": "match_any", 00:18:48.209 "admin_cmd_passthru": { 00:18:48.209 "identify_ctrlr": false 00:18:48.209 }, 00:18:48.209 "dhchap_digests": [ 00:18:48.209 "sha256", 00:18:48.209 "sha384", 00:18:48.209 "sha512" 00:18:48.209 ], 00:18:48.209 "dhchap_dhgroups": [ 00:18:48.209 "null", 00:18:48.209 "ffdhe2048", 00:18:48.209 "ffdhe3072", 00:18:48.209 "ffdhe4096", 00:18:48.209 "ffdhe6144", 00:18:48.209 "ffdhe8192" 00:18:48.209 ] 00:18:48.209 } 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "method": "nvmf_set_max_subsystems", 00:18:48.209 "params": { 00:18:48.209 "max_subsystems": 1024 00:18:48.209 } 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "method": "nvmf_set_crdt", 00:18:48.209 "params": { 00:18:48.209 "crdt1": 0, 00:18:48.209 "crdt2": 0, 00:18:48.209 "crdt3": 0 00:18:48.209 } 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "method": "nvmf_create_transport", 00:18:48.209 "params": { 00:18:48.209 "trtype": "TCP", 00:18:48.209 "max_queue_depth": 128, 00:18:48.209 "max_io_qpairs_per_ctrlr": 127, 00:18:48.209 "in_capsule_data_size": 4096, 00:18:48.209 "max_io_size": 131072, 00:18:48.209 "io_unit_size": 131072, 00:18:48.209 "max_aq_depth": 128, 00:18:48.209 "num_shared_buffers": 511, 00:18:48.209 "buf_cache_size": 4294967295, 00:18:48.209 "dif_insert_or_strip": false, 00:18:48.209 "zcopy": false, 00:18:48.209 "c2h_success": false, 00:18:48.209 "sock_priority": 0, 00:18:48.209 "abort_timeout_sec": 1, 00:18:48.209 "ack_timeout": 0, 00:18:48.209 "data_wr_pool_size": 0 00:18:48.209 } 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "method": "nvmf_create_subsystem", 00:18:48.209 "params": { 00:18:48.209 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.209 "allow_any_host": false, 00:18:48.209 "serial_number": "00000000000000000000", 00:18:48.209 "model_number": "SPDK bdev Controller", 00:18:48.209 "max_namespaces": 32, 00:18:48.209 "min_cntlid": 1, 00:18:48.209 "max_cntlid": 65519, 00:18:48.209 "ana_reporting": false 00:18:48.209 } 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "method": "nvmf_subsystem_add_host", 00:18:48.209 "params": { 00:18:48.209 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.209 "host": "nqn.2016-06.io.spdk:host1", 00:18:48.209 "psk": "key0" 00:18:48.209 } 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "method": "nvmf_subsystem_add_ns", 00:18:48.209 "params": { 00:18:48.209 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.209 "namespace": { 00:18:48.209 "nsid": 1, 00:18:48.209 "bdev_name": "malloc0", 00:18:48.209 "nguid": "4AC9207AD72C48C7A89983D9A5CC627B", 00:18:48.209 "uuid": "4ac9207a-d72c-48c7-a899-83d9a5cc627b", 00:18:48.209 "no_auto_visible": false 00:18:48.209 } 00:18:48.209 } 00:18:48.209 }, 00:18:48.209 { 00:18:48.209 "method": "nvmf_subsystem_add_listener", 00:18:48.209 "params": { 
00:18:48.209 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.209 "listen_address": { 00:18:48.209 "trtype": "TCP", 00:18:48.209 "adrfam": "IPv4", 00:18:48.209 "traddr": "10.0.0.2", 00:18:48.209 "trsvcid": "4420" 00:18:48.209 }, 00:18:48.209 "secure_channel": false, 00:18:48.209 "sock_impl": "ssl" 00:18:48.209 } 00:18:48.209 } 00:18:48.209 ] 00:18:48.210 } 00:18:48.210 ] 00:18:48.210 }' 00:18:48.210 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3044206 00:18:48.210 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3044206 00:18:48.210 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:48.210 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3044206 ']' 00:18:48.210 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.210 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.210 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.210 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.210 17:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.210 [2024-12-06 17:55:35.893404] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:18:48.210 [2024-12-06 17:55:35.893459] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.210 [2024-12-06 17:55:35.966382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.210 [2024-12-06 17:55:35.993938] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.210 [2024-12-06 17:55:35.993967] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.210 [2024-12-06 17:55:35.993974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.210 [2024-12-06 17:55:35.993979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.210 [2024-12-06 17:55:35.993983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:48.210 [2024-12-06 17:55:35.994477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.495 [2024-12-06 17:55:36.189822] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.495 [2024-12-06 17:55:36.221856] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:48.495 [2024-12-06 17:55:36.222054] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.064 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.064 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:49.064 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:49.064 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.064 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.064 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.064 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3044559 00:18:49.064 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3044559 /var/tmp/bdevperf.sock 00:18:49.064 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3044559 ']' 00:18:49.064 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.064 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.064 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
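The -c /dev/fd/62 argument handed to nvmf_tgt above (and the -c /dev/fd/63 handed to bdevperf just below) is how the test replays the JSON produced by save_config without writing a temporary file: bash process substitution opens the echoed config on a spare file descriptor. A minimal sketch of the same pattern, with $tgtcfg standing in for the saved JSON string:

# <(...) expands to a /dev/fd/N path; the app opens and reads it as its config file.
build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")
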
00:18:49.064 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.064 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.064 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:49.064 17:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:49.064 "subsystems": [ 00:18:49.064 { 00:18:49.064 "subsystem": "keyring", 00:18:49.064 "config": [ 00:18:49.064 { 00:18:49.064 "method": "keyring_file_add_key", 00:18:49.064 "params": { 00:18:49.064 "name": "key0", 00:18:49.064 "path": "/tmp/tmp.gSh4r2wfLy" 00:18:49.064 } 00:18:49.064 } 00:18:49.064 ] 00:18:49.064 }, 00:18:49.064 { 00:18:49.064 "subsystem": "iobuf", 00:18:49.064 "config": [ 00:18:49.064 { 00:18:49.064 "method": "iobuf_set_options", 00:18:49.064 "params": { 00:18:49.064 "small_pool_count": 8192, 00:18:49.064 "large_pool_count": 1024, 00:18:49.064 "small_bufsize": 8192, 00:18:49.064 "large_bufsize": 135168, 00:18:49.064 "enable_numa": false 00:18:49.064 } 00:18:49.064 } 00:18:49.064 ] 00:18:49.064 }, 00:18:49.064 { 00:18:49.064 "subsystem": "sock", 00:18:49.064 "config": [ 00:18:49.064 { 00:18:49.064 "method": "sock_set_default_impl", 00:18:49.064 "params": { 00:18:49.064 "impl_name": "posix" 00:18:49.064 } 00:18:49.064 }, 00:18:49.064 { 00:18:49.064 "method": "sock_impl_set_options", 00:18:49.064 "params": { 00:18:49.064 "impl_name": "ssl", 00:18:49.064 "recv_buf_size": 4096, 00:18:49.064 "send_buf_size": 4096, 00:18:49.064 "enable_recv_pipe": true, 00:18:49.064 "enable_quickack": false, 00:18:49.064 "enable_placement_id": 0, 00:18:49.064 "enable_zerocopy_send_server": true, 00:18:49.064 "enable_zerocopy_send_client": false, 00:18:49.064 "zerocopy_threshold": 0, 00:18:49.064 "tls_version": 0, 00:18:49.064 "enable_ktls": false 00:18:49.064 } 00:18:49.064 }, 00:18:49.064 { 00:18:49.064 "method": "sock_impl_set_options", 00:18:49.064 "params": { 00:18:49.064 "impl_name": "posix", 00:18:49.064 "recv_buf_size": 2097152, 00:18:49.064 "send_buf_size": 2097152, 00:18:49.064 "enable_recv_pipe": true, 00:18:49.064 "enable_quickack": false, 00:18:49.064 "enable_placement_id": 0, 00:18:49.064 "enable_zerocopy_send_server": true, 00:18:49.064 "enable_zerocopy_send_client": false, 00:18:49.064 "zerocopy_threshold": 0, 00:18:49.064 "tls_version": 0, 00:18:49.064 "enable_ktls": false 00:18:49.064 } 00:18:49.064 } 00:18:49.064 ] 00:18:49.064 }, 00:18:49.064 { 00:18:49.064 "subsystem": "vmd", 00:18:49.064 "config": [] 00:18:49.064 }, 00:18:49.064 { 00:18:49.064 "subsystem": "accel", 00:18:49.064 "config": [ 00:18:49.064 { 00:18:49.064 "method": "accel_set_options", 00:18:49.064 "params": { 00:18:49.064 "small_cache_size": 128, 00:18:49.064 "large_cache_size": 16, 00:18:49.064 "task_count": 2048, 00:18:49.064 "sequence_count": 2048, 00:18:49.064 "buf_count": 2048 00:18:49.064 } 00:18:49.064 } 00:18:49.064 ] 00:18:49.064 }, 00:18:49.064 { 00:18:49.064 "subsystem": "bdev", 00:18:49.064 "config": [ 00:18:49.064 { 00:18:49.064 "method": "bdev_set_options", 00:18:49.064 "params": { 00:18:49.064 "bdev_io_pool_size": 65535, 00:18:49.064 "bdev_io_cache_size": 256, 00:18:49.064 "bdev_auto_examine": true, 00:18:49.064 "iobuf_small_cache_size": 128, 00:18:49.064 "iobuf_large_cache_size": 16 00:18:49.064 } 00:18:49.064 }, 00:18:49.064 { 00:18:49.064 "method": 
"bdev_raid_set_options", 00:18:49.064 "params": { 00:18:49.064 "process_window_size_kb": 1024, 00:18:49.064 "process_max_bandwidth_mb_sec": 0 00:18:49.064 } 00:18:49.064 }, 00:18:49.064 { 00:18:49.064 "method": "bdev_iscsi_set_options", 00:18:49.064 "params": { 00:18:49.064 "timeout_sec": 30 00:18:49.064 } 00:18:49.064 }, 00:18:49.064 { 00:18:49.064 "method": "bdev_nvme_set_options", 00:18:49.064 "params": { 00:18:49.064 "action_on_timeout": "none", 00:18:49.064 "timeout_us": 0, 00:18:49.064 "timeout_admin_us": 0, 00:18:49.064 "keep_alive_timeout_ms": 10000, 00:18:49.064 "arbitration_burst": 0, 00:18:49.064 "low_priority_weight": 0, 00:18:49.064 "medium_priority_weight": 0, 00:18:49.064 "high_priority_weight": 0, 00:18:49.064 "nvme_adminq_poll_period_us": 10000, 00:18:49.064 "nvme_ioq_poll_period_us": 0, 00:18:49.064 "io_queue_requests": 512, 00:18:49.064 "delay_cmd_submit": true, 00:18:49.064 "transport_retry_count": 4, 00:18:49.064 "bdev_retry_count": 3, 00:18:49.064 "transport_ack_timeout": 0, 00:18:49.064 "ctrlr_loss_timeout_sec": 0, 00:18:49.064 "reconnect_delay_sec": 0, 00:18:49.064 "fast_io_fail_timeout_sec": 0, 00:18:49.064 "disable_auto_failback": false, 00:18:49.064 "generate_uuids": false, 00:18:49.064 "transport_tos": 0, 00:18:49.064 "nvme_error_stat": false, 00:18:49.064 "rdma_srq_size": 0, 00:18:49.064 "io_path_stat": false, 00:18:49.064 "allow_accel_sequence": false, 00:18:49.064 "rdma_max_cq_size": 0, 00:18:49.064 "rdma_cm_event_timeout_ms": 0, 00:18:49.064 "dhchap_digests": [ 00:18:49.064 "sha256", 00:18:49.064 "sha384", 00:18:49.064 "sha512" 00:18:49.064 ], 00:18:49.064 "dhchap_dhgroups": [ 00:18:49.064 "null", 00:18:49.064 "ffdhe2048", 00:18:49.064 "ffdhe3072", 00:18:49.064 "ffdhe4096", 00:18:49.064 "ffdhe6144", 00:18:49.064 "ffdhe8192" 00:18:49.064 ], 00:18:49.064 "rdma_umr_per_io": false 00:18:49.064 } 00:18:49.064 }, 00:18:49.064 { 00:18:49.064 "method": "bdev_nvme_attach_controller", 00:18:49.064 "params": { 00:18:49.064 "name": "nvme0", 00:18:49.064 "trtype": "TCP", 00:18:49.065 "adrfam": "IPv4", 00:18:49.065 "traddr": "10.0.0.2", 00:18:49.065 "trsvcid": "4420", 00:18:49.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.065 "prchk_reftag": false, 00:18:49.065 "prchk_guard": false, 00:18:49.065 "ctrlr_loss_timeout_sec": 0, 00:18:49.065 "reconnect_delay_sec": 0, 00:18:49.065 "fast_io_fail_timeout_sec": 0, 00:18:49.065 "psk": "key0", 00:18:49.065 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:49.065 "hdgst": false, 00:18:49.065 "ddgst": false, 00:18:49.065 "multipath": "multipath" 00:18:49.065 } 00:18:49.065 }, 00:18:49.065 { 00:18:49.065 "method": "bdev_nvme_set_hotplug", 00:18:49.065 "params": { 00:18:49.065 "period_us": 100000, 00:18:49.065 "enable": false 00:18:49.065 } 00:18:49.065 }, 00:18:49.065 { 00:18:49.065 "method": "bdev_enable_histogram", 00:18:49.065 "params": { 00:18:49.065 "name": "nvme0n1", 00:18:49.065 "enable": true 00:18:49.065 } 00:18:49.065 }, 00:18:49.065 { 00:18:49.065 "method": "bdev_wait_for_examine" 00:18:49.065 } 00:18:49.065 ] 00:18:49.065 }, 00:18:49.065 { 00:18:49.065 "subsystem": "nbd", 00:18:49.065 "config": [] 00:18:49.065 } 00:18:49.065 ] 00:18:49.065 }' 00:18:49.065 [2024-12-06 17:55:36.725131] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:18:49.065 [2024-12-06 17:55:36.725185] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3044559 ] 00:18:49.065 [2024-12-06 17:55:36.795891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.065 [2024-12-06 17:55:36.825755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.324 [2024-12-06 17:55:36.962090] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:49.893 17:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.893 17:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:49.893 17:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:49.893 17:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:49.893 17:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.893 17:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:50.153 Running I/O for 1 seconds... 00:18:51.091 4545.00 IOPS, 17.75 MiB/s 00:18:51.091 Latency(us) 00:18:51.091 [2024-12-06T16:55:38.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.091 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:51.091 Verification LBA range: start 0x0 length 0x2000 00:18:51.091 nvme0n1 : 1.01 4627.04 18.07 0.00 0.00 27520.03 3577.17 118838.61 00:18:51.091 [2024-12-06T16:55:38.918Z] =================================================================================================================== 00:18:51.091 [2024-12-06T16:55:38.918Z] Total : 4627.04 18.07 0.00 0.00 27520.03 3577.17 118838.61 00:18:51.091 { 00:18:51.091 "results": [ 00:18:51.091 { 00:18:51.091 "job": "nvme0n1", 00:18:51.091 "core_mask": "0x2", 00:18:51.091 "workload": "verify", 00:18:51.091 "status": "finished", 00:18:51.091 "verify_range": { 00:18:51.091 "start": 0, 00:18:51.091 "length": 8192 00:18:51.091 }, 00:18:51.091 "queue_depth": 128, 00:18:51.091 "io_size": 4096, 00:18:51.091 "runtime": 1.009932, 00:18:51.091 "iops": 4627.044197035048, 00:18:51.091 "mibps": 18.074391394668154, 00:18:51.091 "io_failed": 0, 00:18:51.091 "io_timeout": 0, 00:18:51.091 "avg_latency_us": 27520.033234895498, 00:18:51.091 "min_latency_us": 3577.173333333333, 00:18:51.091 "max_latency_us": 118838.61333333333 00:18:51.092 } 00:18:51.092 ], 00:18:51.092 "core_count": 1 00:18:51.092 } 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' 
--id = --pid ']' 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:51.092 nvmf_trace.0 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3044559 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3044559 ']' 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3044559 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3044559 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3044559' 00:18:51.092 killing process with pid 3044559 00:18:51.092 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3044559 00:18:51.092 Received shutdown signal, test time was about 1.000000 seconds 00:18:51.092 00:18:51.092 Latency(us) 00:18:51.092 [2024-12-06T16:55:38.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.092 [2024-12-06T16:55:38.919Z] =================================================================================================================== 00:18:51.092 [2024-12-06T16:55:38.919Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:51.352 17:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3044559 00:18:51.352 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:51.352 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:51.352 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:51.352 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:51.352 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:51.352 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:51.352 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:51.352 rmmod nvme_tcp 00:18:51.352 rmmod nvme_fabrics 00:18:51.352 rmmod nvme_keyring 00:18:51.352 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:51.352 17:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:51.352 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:51.352 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3044206 ']' 00:18:51.352 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3044206 00:18:51.352 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3044206 ']' 00:18:51.352 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3044206 00:18:51.352 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:51.352 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.353 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3044206 00:18:51.353 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:51.353 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:51.353 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3044206' 00:18:51.353 killing process with pid 3044206 00:18:51.353 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3044206 00:18:51.353 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3044206 00:18:51.612 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:51.612 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:51.612 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:51.612 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:51.612 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:51.612 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:51.612 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:51.612 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:51.612 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:51.612 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.612 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:51.612 17:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.518 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:53.518 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.QzUlFIWUbK /tmp/tmp.SbsIMzouIh /tmp/tmp.gSh4r2wfLy 00:18:53.518 00:18:53.518 real 1m16.644s 00:18:53.518 user 2m1.034s 00:18:53.518 sys 0m22.972s 00:18:53.518 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:53.518 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.518 ************************************ 00:18:53.519 END TEST nvmf_tls 
00:18:53.519 ************************************ 00:18:53.519 17:55:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:53.519 17:55:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:53.519 17:55:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:53.519 17:55:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:53.519 ************************************ 00:18:53.519 START TEST nvmf_fips 00:18:53.519 ************************************ 00:18:53.519 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:53.780 * Looking for test storage... 00:18:53.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:53.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.780 --rc genhtml_branch_coverage=1 00:18:53.780 --rc genhtml_function_coverage=1 00:18:53.780 --rc genhtml_legend=1 00:18:53.780 --rc geninfo_all_blocks=1 00:18:53.780 --rc geninfo_unexecuted_blocks=1 00:18:53.780 00:18:53.780 ' 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:53.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.780 --rc genhtml_branch_coverage=1 00:18:53.780 --rc genhtml_function_coverage=1 00:18:53.780 --rc genhtml_legend=1 00:18:53.780 --rc geninfo_all_blocks=1 00:18:53.780 --rc geninfo_unexecuted_blocks=1 00:18:53.780 00:18:53.780 ' 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:53.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.780 --rc genhtml_branch_coverage=1 00:18:53.780 --rc genhtml_function_coverage=1 00:18:53.780 --rc genhtml_legend=1 00:18:53.780 --rc geninfo_all_blocks=1 00:18:53.780 --rc geninfo_unexecuted_blocks=1 00:18:53.780 00:18:53.780 ' 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:53.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.780 --rc genhtml_branch_coverage=1 00:18:53.780 --rc genhtml_function_coverage=1 00:18:53.780 --rc genhtml_legend=1 00:18:53.780 --rc geninfo_all_blocks=1 00:18:53.780 --rc geninfo_unexecuted_blocks=1 00:18:53.780 00:18:53.780 ' 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:53.780 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:53.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:53.781 17:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:53.781 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:53.782 Error setting digest 00:18:53.782 403255C4EB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:53.782 403255C4EB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:53.782 
17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:53.782 17:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:59.058 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.059 17:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:59.059 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:59.059 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:59.059 17:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:59.059 Found net devices under 0000:31:00.0: cvl_0_0 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:59.059 Found net devices under 0000:31:00.1: cvl_0_1 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:59.059 17:55:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:59.059 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:59.060 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:59.060 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:59.060 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:59.060 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:59.060 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:59.060 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:59.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:59.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:18:59.319 00:18:59.319 --- 10.0.0.2 ping statistics --- 00:18:59.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.319 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:59.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:59.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:18:59.319 00:18:59.319 --- 10.0.0.1 ping statistics --- 00:18:59.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.319 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3049485 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3049485 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3049485 ']' 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:59.319 17:55:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:59.319 [2024-12-06 17:55:47.012461] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:18:59.319 [2024-12-06 17:55:47.012533] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.319 [2024-12-06 17:55:47.094251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.319 [2024-12-06 17:55:47.131302] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.319 [2024-12-06 17:55:47.131345] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.319 [2024-12-06 17:55:47.131351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.319 [2024-12-06 17:55:47.131356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.320 [2024-12-06 17:55:47.131361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.320 [2024-12-06 17:55:47.131943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.255 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.255 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:00.255 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:00.255 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:00.255 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:00.255 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.255 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:00.255 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:00.255 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:00.255 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.hAb 00:19:00.255 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:00.255 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.hAb 00:19:00.255 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.hAb 00:19:00.255 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.hAb 00:19:00.255 17:55:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:00.255 [2024-12-06 17:55:47.960406] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.256 [2024-12-06 17:55:47.976414] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:00.256 [2024-12-06 17:55:47.976569] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.256 malloc0 00:19:00.256 17:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:00.256 17:55:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3049635 00:19:00.256 17:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3049635 /var/tmp/bdevperf.sock 00:19:00.256 17:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3049635 ']' 00:19:00.256 17:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:00.256 17:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.256 17:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:00.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:00.256 17:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.256 17:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:00.256 17:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:00.256 [2024-12-06 17:55:48.078770] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:19:00.256 [2024-12-06 17:55:48.078829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3049635 ] 00:19:00.516 [2024-12-06 17:55:48.160792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.516 [2024-12-06 17:55:48.196098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.083 17:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.083 17:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:01.083 17:55:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.hAb 00:19:01.341 17:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:01.342 [2024-12-06 17:55:49.144742] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:01.600 TLSTESTn1 00:19:01.600 17:55:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:01.600 Running I/O for 10 seconds... 
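Before the per-second throughput below, the TLS plumbing that fips.sh just performed is easier to follow in one place. This is an editorial condensation of the rpc.py and bdevperf.py calls visible in the trace, using the key material, socket path and NQNs from this particular run, and assuming the target listener on 10.0.0.2:4420 set up earlier is still running:

    # bdevperf is a second SPDK app on core 2 (-m 0x4); -z makes it wait
    # so the workload can be driven over its own RPC socket.
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &

    # The PSK lives in a mode-0600 file (here /tmp/spdk-psk.hAb, a mktemp
    # result) and is loaded into the app's keyring before the TLS connect.
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > /tmp/spdk-psk.hAb
    chmod 0600 /tmp/spdk-psk.hAb
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.hAb

    # Attach to the target over TLS (flagged experimental above) and run:
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests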
00:19:03.917 6389.00 IOPS, 24.96 MiB/s
[2024-12-06T16:55:52.682Z] 6549.50 IOPS, 25.58 MiB/s
[2024-12-06T16:55:53.621Z] 6576.00 IOPS, 25.69 MiB/s
[2024-12-06T16:55:54.559Z] 6484.75 IOPS, 25.33 MiB/s
[2024-12-06T16:55:55.497Z] 6407.80 IOPS, 25.03 MiB/s
[2024-12-06T16:55:56.434Z] 6474.17 IOPS, 25.29 MiB/s
[2024-12-06T16:55:57.372Z] 6502.43 IOPS, 25.40 MiB/s
[2024-12-06T16:55:58.750Z] 6511.62 IOPS, 25.44 MiB/s
[2024-12-06T16:55:59.687Z] 6517.00 IOPS, 25.46 MiB/s
[2024-12-06T16:55:59.687Z] 6538.20 IOPS, 25.54 MiB/s
00:19:11.860 Latency(us)
00:19:11.860 [2024-12-06T16:55:59.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:11.860 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:11.860 Verification LBA range: start 0x0 length 0x2000
00:19:11.860 TLSTESTn1 : 10.01 6542.11 25.56 0.00 0.00 19536.26 4560.21 43035.31
00:19:11.860 [2024-12-06T16:55:59.687Z] ===================================================================================================================
00:19:11.860 [2024-12-06T16:55:59.687Z] Total : 6542.11 25.56 0.00 0.00 19536.26 4560.21 43035.31
00:19:11.860 {
00:19:11.860 "results": [
00:19:11.860 {
00:19:11.860 "job": "TLSTESTn1",
00:19:11.860 "core_mask": "0x4",
00:19:11.860 "workload": "verify",
00:19:11.860 "status": "finished",
00:19:11.860 "verify_range": {
00:19:11.860 "start": 0,
00:19:11.860 "length": 8192
00:19:11.860 },
00:19:11.860 "queue_depth": 128,
00:19:11.860 "io_size": 4096,
00:19:11.860 "runtime": 10.013589,
00:19:11.860 "iops": 6542.109926820443,
00:19:11.860 "mibps": 25.555116901642357,
00:19:11.860 "io_failed": 0,
00:19:11.860 "io_timeout": 0,
00:19:11.860 "avg_latency_us": 19536.25892433725,
00:19:11.860 "min_latency_us": 4560.213333333333,
00:19:11.860 "max_latency_us": 43035.306666666664
00:19:11.860 }
00:19:11.860 ],
00:19:11.860 "core_count": 1
00:19:11.860 }
00:19:11.860 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id
00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0
00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files
00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:19:11.861 nvmf_trace.0
00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0
00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3049635
00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3049635 ']'
00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips --
common/autotest_common.sh@958 -- # kill -0 3049635 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3049635 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3049635' 00:19:11.861 killing process with pid 3049635 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3049635 00:19:11.861 Received shutdown signal, test time was about 10.000000 seconds 00:19:11.861 00:19:11.861 Latency(us) 00:19:11.861 [2024-12-06T16:55:59.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.861 [2024-12-06T16:55:59.688Z] =================================================================================================================== 00:19:11.861 [2024-12-06T16:55:59.688Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3049635 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:11.861 rmmod nvme_tcp 00:19:11.861 rmmod nvme_fabrics 00:19:11.861 rmmod nvme_keyring 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3049485 ']' 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3049485 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3049485 ']' 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3049485 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.861 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3049485 00:19:12.122 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:12.122 17:55:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:12.122 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3049485' 00:19:12.122 killing process with pid 3049485 00:19:12.122 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3049485 00:19:12.122 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3049485 00:19:12.122 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:12.122 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:12.122 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:12.122 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:12.122 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:12.122 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:12.122 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:12.122 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:12.122 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:12.122 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.122 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.122 17:55:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.025 17:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:14.025 17:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.hAb 00:19:14.025 00:19:14.025 real 0m20.521s 00:19:14.025 user 0m24.559s 00:19:14.025 sys 0m6.743s 00:19:14.025 17:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.025 17:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:14.025 ************************************ 00:19:14.025 END TEST nvmf_fips 00:19:14.025 ************************************ 00:19:14.286 17:56:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:14.286 17:56:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:14.286 17:56:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.286 17:56:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:14.286 ************************************ 00:19:14.286 START TEST nvmf_control_msg_list 00:19:14.286 ************************************ 00:19:14.286 17:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:14.286 * Looking for test storage... 
00:19:14.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:14.286 17:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:14.286 17:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:19:14.286 17:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:14.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.286 --rc genhtml_branch_coverage=1 00:19:14.286 --rc genhtml_function_coverage=1 00:19:14.286 --rc genhtml_legend=1 00:19:14.286 --rc geninfo_all_blocks=1 00:19:14.286 --rc geninfo_unexecuted_blocks=1 00:19:14.286 00:19:14.286 ' 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:14.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.286 --rc genhtml_branch_coverage=1 00:19:14.286 --rc genhtml_function_coverage=1 00:19:14.286 --rc genhtml_legend=1 00:19:14.286 --rc geninfo_all_blocks=1 00:19:14.286 --rc geninfo_unexecuted_blocks=1 00:19:14.286 00:19:14.286 ' 00:19:14.286 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:14.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.286 --rc genhtml_branch_coverage=1 00:19:14.286 --rc genhtml_function_coverage=1 00:19:14.286 --rc genhtml_legend=1 00:19:14.286 --rc geninfo_all_blocks=1 00:19:14.286 --rc geninfo_unexecuted_blocks=1 00:19:14.286 00:19:14.286 ' 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:14.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.287 --rc genhtml_branch_coverage=1 00:19:14.287 --rc genhtml_function_coverage=1 00:19:14.287 --rc genhtml_legend=1 00:19:14.287 --rc geninfo_all_blocks=1 00:19:14.287 --rc geninfo_unexecuted_blocks=1 00:19:14.287 00:19:14.287 ' 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:14.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:14.287 17:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:19.560 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:19.560 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:19.560 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:19.560 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:19.561 17:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:19.561 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.561 17:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:19.561 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:19.561 Found net devices under 0000:31:00.0: cvl_0_0 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:19.561 Found net devices under 0000:31:00.1: cvl_0_1 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:19.561 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:19.819 17:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:19.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:19.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:19:19.819 00:19:19.819 --- 10.0.0.2 ping statistics --- 00:19:19.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.819 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:19.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:19.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:19:19.819 00:19:19.819 --- 10.0.0.1 ping statistics --- 00:19:19.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.819 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3056639 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3056639 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3056639 ']' 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.819 17:56:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:19.819 [2024-12-06 17:56:07.644763] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:19:19.819 [2024-12-06 17:56:07.644814] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.077 [2024-12-06 17:56:07.727933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.077 [2024-12-06 17:56:07.763335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.077 [2024-12-06 17:56:07.763365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.077 [2024-12-06 17:56:07.763377] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.077 [2024-12-06 17:56:07.763383] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.077 [2024-12-06 17:56:07.763389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
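At this point nvmftestinit has finished the phy-mode network split: the two E810 ports talk only through the kernel stack, with the target-side port hidden in a private network namespace so initiator and target can share one host. Condensed from the trace (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are this run's values; the helper does more validation than shown):

    # Condensed sketch of nvmf_tcp_init as traced above (nvmf/common.sh @250-@291).
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator port stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Open the NVMe/TCP port; the comment tags the rule so the iptables-save |
    # grep -v SPDK_NVMF | iptables-restore pass in nvmftestfini can strip it.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                         # root ns -> namespaced target port
    ip netns exec "$NS" ping -c 1 10.0.0.1     # and back
    # nvmfappstart then launches the target inside the namespace:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &

The sub-millisecond ping times above confirm the two ports can reach each other before the target comes up.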
00:19:20.077 [2024-12-06 17:56:07.763962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.643 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.643 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:20.643 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:20.643 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:20.643 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:20.643 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.643 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:20.643 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:20.643 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:20.643 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.643 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:20.643 [2024-12-06 17:56:08.453653] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.643 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.643 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:20.643 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.643 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:20.643 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.643 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:20.643 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.643 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:20.901 Malloc0 00:19:20.901 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.901 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:20.901 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.901 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:20.901 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.901 17:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:20.901 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.901 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:20.901 [2024-12-06 17:56:08.488677] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.901 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.901 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3056927 00:19:20.901 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3056928 00:19:20.901 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:20.901 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3056929 00:19:20.901 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3056927 00:19:20.902 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:20.902 17:56:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:20.902 [2024-12-06 17:56:08.547249] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:20.902 [2024-12-06 17:56:08.547563] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:20.902 [2024-12-06 17:56:08.557146] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:22.276 Initializing NVMe Controllers 00:19:22.276 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:22.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:22.276 Initialization complete. Launching workers. 
00:19:22.276 ======================================================== 00:19:22.276 Latency(us) 00:19:22.276 Device Information : IOPS MiB/s Average min max 00:19:22.276 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40903.84 40684.08 41095.42 00:19:22.276 ======================================================== 00:19:22.276 Total : 25.00 0.10 40903.84 40684.08 41095.42 00:19:22.276 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3056928 00:19:22.276 Initializing NVMe Controllers 00:19:22.276 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:22.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:22.276 Initialization complete. Launching workers. 00:19:22.276 ======================================================== 00:19:22.276 Latency(us) 00:19:22.276 Device Information : IOPS MiB/s Average min max 00:19:22.276 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40891.83 40599.21 40947.50 00:19:22.276 ======================================================== 00:19:22.276 Total : 25.00 0.10 40891.83 40599.21 40947.50 00:19:22.276 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3056929 00:19:22.276 Initializing NVMe Controllers 00:19:22.276 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:22.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:22.276 Initialization complete. Launching workers. 00:19:22.276 ======================================================== 00:19:22.276 Latency(us) 00:19:22.276 Device Information : IOPS MiB/s Average min max 00:19:22.276 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1773.00 6.93 563.80 253.50 734.45 00:19:22.276 ======================================================== 00:19:22.276 Total : 1773.00 6.93 563.80 253.50 734.45 00:19:22.276 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:22.276 rmmod nvme_tcp 00:19:22.276 rmmod nvme_fabrics 00:19:22.276 rmmod nvme_keyring 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 
-- # '[' -n 3056639 ']' 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3056639 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3056639 ']' 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3056639 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3056639 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3056639' 00:19:22.276 killing process with pid 3056639 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3056639 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3056639 00:19:22.276 17:56:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:22.276 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:22.276 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:22.276 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:22.276 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:22.276 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:22.276 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:22.276 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:22.276 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:22.276 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.277 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.277 17:56:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.259 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:24.259 00:19:24.259 real 0m10.154s 00:19:24.259 user 0m7.027s 00:19:24.259 sys 0m5.064s 00:19:24.259 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.259 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:24.259 ************************************ 00:19:24.259 END TEST nvmf_control_msg_list 00:19:24.259 
************************************ 00:19:24.259 17:56:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:24.259 17:56:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:24.259 17:56:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:24.259 17:56:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:24.519 ************************************ 00:19:24.519 START TEST nvmf_wait_for_buf 00:19:24.519 ************************************ 00:19:24.519 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:24.519 * Looking for test storage... 00:19:24.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:24.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.520 --rc genhtml_branch_coverage=1 00:19:24.520 --rc genhtml_function_coverage=1 00:19:24.520 --rc genhtml_legend=1 00:19:24.520 --rc geninfo_all_blocks=1 00:19:24.520 --rc geninfo_unexecuted_blocks=1 00:19:24.520 00:19:24.520 ' 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:24.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.520 --rc genhtml_branch_coverage=1 00:19:24.520 --rc genhtml_function_coverage=1 00:19:24.520 --rc genhtml_legend=1 00:19:24.520 --rc geninfo_all_blocks=1 00:19:24.520 --rc geninfo_unexecuted_blocks=1 00:19:24.520 00:19:24.520 ' 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:24.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.520 --rc genhtml_branch_coverage=1 00:19:24.520 --rc genhtml_function_coverage=1 00:19:24.520 --rc genhtml_legend=1 00:19:24.520 --rc geninfo_all_blocks=1 00:19:24.520 --rc geninfo_unexecuted_blocks=1 00:19:24.520 00:19:24.520 ' 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:24.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:24.520 --rc genhtml_branch_coverage=1 00:19:24.520 --rc genhtml_function_coverage=1 00:19:24.520 --rc genhtml_legend=1 00:19:24.520 --rc geninfo_all_blocks=1 00:19:24.520 --rc geninfo_unexecuted_blocks=1 00:19:24.520 00:19:24.520 ' 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:24.520 17:56:12 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:24.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:24.520 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:24.521 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:19:24.521 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.521 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:24.521 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:24.521 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:24.521 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.521 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:24.521 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.521 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:24.521 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:24.521 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:24.521 17:56:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.801 
17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:29.801 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:29.801 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.801 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:29.801 Found net devices under 0000:31:00.0: cvl_0_0 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:29.802 Found net devices under 0000:31:00.1: cvl_0_1 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.802 17:56:17 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:29.802 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:30.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:19:30.061 00:19:30.061 --- 10.0.0.2 ping statistics --- 00:19:30.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.061 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:30.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:30.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms
00:19:30.061
00:19:30.061 --- 10.0.0.1 ping statistics ---
00:19:30.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:30.061 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms
00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0
00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc
00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3061634
00:19:30.061 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3061634
00:19:30.062 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3061634 ']'
00:19:30.062 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:30.062 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:30.062 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:30.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:30.062 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:30.062 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:30.062 17:56:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:19:30.062 [2024-12-06 17:56:17.874536] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization...
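
The nvmftestinit sequence traced above builds the whole NVMe/TCP test topology on a single host: the first E810 port (cvl_0_0) is moved into a fresh network namespace to act as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits port 4420, and connectivity is ping-verified in both directions before nvmf_tgt is launched inside the namespace. A minimal sketch of that wiring, condensed from the trace (it assumes the two E810 ports are cabled back-to-back, as they appear to be on this rig):

  # Target NIC lives in its own namespace; initiator NIC stays in the root ns.
  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side (root namespace)
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                      # root ns -> target
  ip netns exec $NS ping -c 1 10.0.0.1                    # target ns -> initiator
  # Start the target inside the namespace; --wait-for-rpc defers framework init
  # so pool sizes can first be shrunk over /var/tmp/spdk.sock.
  ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

nvmftestfini later reverses all of this (iptables-save | grep -v SPDK_NVMF | iptables-restore, plus namespace removal), as the cleanup trace further down shows.
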
00:19:30.062 [2024-12-06 17:56:17.874599] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.322 [2024-12-06 17:56:17.962975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.322 [2024-12-06 17:56:18.002414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.322 [2024-12-06 17:56:18.002456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.322 [2024-12-06 17:56:18.002465] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.322 [2024-12-06 17:56:18.002472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.322 [2024-12-06 17:56:18.002478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.322 [2024-12-06 17:56:18.003183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.892 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.892 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:30.892 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:30.892 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:30.892 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.892 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.892 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:30.892 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:30.892 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:30.892 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.892 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.892 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.892 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:30.892 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.892 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:31.152 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.152 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:31.152 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.152 17:56:18 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:31.152 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.152 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:31.152 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.152 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:31.152 Malloc0 00:19:31.152 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.152 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:31.152 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.152 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:31.152 [2024-12-06 17:56:18.788732] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.152 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.152 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:31.152 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.152 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:31.152 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.152 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:31.153 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.153 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:31.153 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.153 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:31.153 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.153 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:31.153 [2024-12-06 17:56:18.812899] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.153 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.153 17:56:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:31.153 [2024-12-06 17:56:18.900189] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:19:32.534 Initializing NVMe Controllers
00:19:32.534 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:19:32.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:19:32.534 Initialization complete. Launching workers.
00:19:32.534 ========================================================
00:19:32.534 Latency(us)
00:19:32.534 Device Information : IOPS MiB/s Average min max
00:19:32.534 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.79 15.97 32493.68 8099.02 66846.61
00:19:32.534 ========================================================
00:19:32.534 Total : 127.79 15.97 32493.68 8099.02 66846.61
00:19:32.534
00:19:32.534 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:19:32.534 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:19:32.534 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.534 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:32.534 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.534 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022
00:19:32.534 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]]
00:19:32.534 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:19:32.534 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:19:32.534 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:32.534 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:19:32.534 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:32.534 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:19:32.534 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:32.534 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:32.534 rmmod nvme_tcp
00:19:32.534 rmmod nvme_fabrics
00:19:32.534 rmmod nvme_keyring
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3061634 ']'
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3061634
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3061634 ']'
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3061634
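
What the retry counter above verifies: wait_for_buf.sh deliberately caps the iobuf small pool at 154 buffers and gives the TCP transport only 24 shared buffers (-n 24 -b 24) before serving queue-depth-4 128 KiB reads, so the target is forced to re-queue buffer requests; the test passes only because iobuf_get_stats reports a non-zero small_pool.retry for nvmf_TCP (2022 here). The same provisioning and pass check, restated as direct rpc.py calls — a condensed sketch, where the RPC names, flags, and values are copied verbatim from the trace, and the rpc.py wrapper with its socket path stands in for the harness's rpc_cmd helper:

  # Condensed restatement of the wait_for_buf provisioning and pass check.
  rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"   # stand-in for rpc_cmd
  $rpc accel_set_options --small-cache-size 0 --large-cache-size 0
  $rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # starve small iobufs
  $rpc framework_start_init
  $rpc bdev_malloc_create -b Malloc0 32 512
  $rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24             # tiny shared-buffer pool
  $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  retry_count=$($rpc iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  [[ $retry_count -eq 0 ]] && exit 1   # zero retries would mean the pool was never exhausted

The low throughput in the table above (≈128 IOPS, ≈32 ms average latency) is expected: every I/O is stalling on the starved buffer pools, which is exactly the path being exercised.
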
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3061634
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3061634'
00:19:32.793 killing process with pid 3061634
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3061634
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3061634
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:32.793 17:56:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:35.331 17:56:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:19:35.331
00:19:35.331 real 0m10.546s
00:19:35.331 user 0m4.394s
00:19:35.331 sys 0m4.559s
00:19:35.331 17:56:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:35.331 17:56:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:35.331 ************************************
00:19:35.331 END TEST nvmf_wait_for_buf
00:19:35.331 ************************************
00:19:35.331 17:56:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']'
00:19:35.331 17:56:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]]
00:19:35.331 17:56:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']'
00:19:35.331 17:56:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs
00:19:35.331 17:56:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable
00:19:35.331 17:56:22
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:40.604 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:40.604 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:40.604 Found net devices under 0000:31:00.0: cvl_0_0 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:40.604 Found net devices under 0000:31:00.1: cvl_0_1 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:40.604 ************************************ 00:19:40.604 START TEST nvmf_perf_adq 00:19:40.604 ************************************ 00:19:40.604 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:40.605 * Looking for test storage... 00:19:40.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.605 17:56:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:40.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.605 --rc genhtml_branch_coverage=1 00:19:40.605 --rc genhtml_function_coverage=1 00:19:40.605 --rc genhtml_legend=1 00:19:40.605 --rc geninfo_all_blocks=1 00:19:40.605 --rc geninfo_unexecuted_blocks=1 00:19:40.605 00:19:40.605 ' 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:40.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.605 --rc genhtml_branch_coverage=1 00:19:40.605 --rc genhtml_function_coverage=1 00:19:40.605 --rc genhtml_legend=1 00:19:40.605 --rc geninfo_all_blocks=1 00:19:40.605 --rc geninfo_unexecuted_blocks=1 00:19:40.605 00:19:40.605 ' 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:40.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.605 --rc genhtml_branch_coverage=1 00:19:40.605 --rc genhtml_function_coverage=1 00:19:40.605 --rc genhtml_legend=1 00:19:40.605 --rc geninfo_all_blocks=1 00:19:40.605 --rc geninfo_unexecuted_blocks=1 00:19:40.605 00:19:40.605 ' 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:40.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.605 --rc genhtml_branch_coverage=1 00:19:40.605 --rc genhtml_function_coverage=1 00:19:40.605 --rc genhtml_legend=1 00:19:40.605 --rc geninfo_all_blocks=1 00:19:40.605 --rc geninfo_unexecuted_blocks=1 00:19:40.605 00:19:40.605 ' 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
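
The scripts/common.sh block just traced is the coverage gate deciding whether the installed lcov (1.15) predates 2.x, which controls the --rc lcov_* option set exported right after it. The comparison splits each version string on '.', '-' and ':' and compares fields numerically from the left. A condensed paraphrase of the cmp_versions logic walked above (not the script's literal text; the padding of missing fields with 0 is an assumption consistent with the trace):

  # lt A B  ->  success when version A sorts strictly before version B.
  lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # A newer: not less-than
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # A older: less-than
      done
      return 1   # equal versions are not less-than
  }
  lt 1.15 2 && echo "old lcov: pass branch/function coverage flags via LCOV_OPTS"

Here lt 1.15 2 compares 1 against 2 in the first field and immediately returns true, which is why the run above exports the lcov_branch_coverage/lcov_function_coverage options.
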
00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.605 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.606 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.606 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:40.606 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.606 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:40.606 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:40.606 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:40.606 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:40.606 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.606 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.606 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:40.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:40.606 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:40.606 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:40.606 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:40.606 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:40.606 17:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:40.606 17:56:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:45.879 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:45.880 17:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:45.880 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:45.880 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:45.880 Found net devices under 0000:31:00.0: cvl_0_0 00:19:45.880 17:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:45.880 Found net devices under 0000:31:00.1: cvl_0_1 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:45.880 17:56:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:46.818 17:56:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:49.355 17:56:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:54.637 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:54.637 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:54.637 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.637 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:54.637 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:54.637 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:54.637 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:54.638 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:54.638 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:54.638 Found net devices under 0000:31:00.0: cvl_0_0 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:54.638 Found net devices under 0000:31:00.1: cvl_0_1 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:54.638 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:54.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:54.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:19:54.638 00:19:54.639 --- 10.0.0.2 ping statistics --- 00:19:54.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.639 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:54.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:54.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:19:54.639 00:19:54.639 --- 10.0.0.1 ping statistics --- 00:19:54.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.639 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3072262 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3072262 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3072262 ']' 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.639 17:56:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:54.639 [2024-12-06 17:56:41.919530] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
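[Annotation] Both E810 ports live on the same host, so nvmf_tcp_init isolates the target port in a network namespace so that traffic between 10.0.0.1 and 10.0.0.2 traverses the NICs rather than the kernel loopback. Condensed from the @250-@291 entries above, the topology is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:...'                   # tagged for cleanup
  ping -c 1 10.0.0.2                                       # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator

Every later target-side command, including nvmf_tgt itself, is therefore prefixed with ip netns exec cvl_0_0_ns_spdk, which is why the @508 entry above launches the app that way.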
00:19:54.639 [2024-12-06 17:56:41.919580] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.639 [2024-12-06 17:56:42.004371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:54.639 [2024-12-06 17:56:42.042197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.639 [2024-12-06 17:56:42.042232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.639 [2024-12-06 17:56:42.042240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:54.639 [2024-12-06 17:56:42.042247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:54.639 [2024-12-06 17:56:42.042253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:54.639 [2024-12-06 17:56:42.043748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.639 [2024-12-06 17:56:42.043903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.639 [2024-12-06 17:56:42.044052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.639 [2024-12-06 17:56:42.044052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:54.899 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.899 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:54.899 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:54.899 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:54.899 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.160 
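[Annotation] What runs here is adq_configure_nvmf_target 0. Because nvmf_tgt was started with --wait-for-rpc, the subsystem framework is not yet initialized, which is the only window in which socket implementation options may be changed; the trace therefore sets them first, then calls framework_start_init and creates the transport (continued in the next entries). rpc_cmd in these traces forwards to the SPDK RPC socket at /var/tmp/spdk.sock; a standalone equivalent via scripts/rpc.py would be roughly:

  impl=$(scripts/rpc.py sock_get_default_impl | jq -r .impl_name)  # "posix" above
  scripts/rpc.py sock_impl_set_options -i "$impl" \
      --enable-placement-id 0 --enable-zerocopy-send-server
  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 \
      --sock-priority 0                  # flags verbatim from the trace above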
17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.160 [2024-12-06 17:56:42.838902] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.160 Malloc1 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.160 [2024-12-06 17:56:42.889916] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3072615 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:55.160 17:56:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:57.699 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:19:57.699 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.699 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:57.699 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.699 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:19:57.699 "tick_rate": 2400000000, 00:19:57.699 "poll_groups": [ 00:19:57.699 { 00:19:57.699 "name": "nvmf_tgt_poll_group_000", 00:19:57.699 "admin_qpairs": 1, 00:19:57.699 "io_qpairs": 1, 00:19:57.699 "current_admin_qpairs": 1, 00:19:57.699 "current_io_qpairs": 1, 00:19:57.699 "pending_bdev_io": 0, 00:19:57.699 "completed_nvme_io": 27815, 00:19:57.699 "transports": [ 00:19:57.699 { 00:19:57.699 "trtype": "TCP" 00:19:57.699 } 00:19:57.699 ] 00:19:57.699 }, 00:19:57.699 { 00:19:57.699 "name": "nvmf_tgt_poll_group_001", 00:19:57.699 "admin_qpairs": 0, 00:19:57.699 "io_qpairs": 1, 00:19:57.699 "current_admin_qpairs": 0, 00:19:57.699 "current_io_qpairs": 1, 00:19:57.699 "pending_bdev_io": 0, 00:19:57.699 "completed_nvme_io": 28684, 00:19:57.699 "transports": [ 00:19:57.699 { 00:19:57.699 "trtype": "TCP" 00:19:57.699 } 00:19:57.699 ] 00:19:57.699 }, 00:19:57.699 { 00:19:57.699 "name": "nvmf_tgt_poll_group_002", 00:19:57.699 "admin_qpairs": 0, 00:19:57.699 "io_qpairs": 1, 00:19:57.699 "current_admin_qpairs": 0, 00:19:57.699 "current_io_qpairs": 1, 00:19:57.699 "pending_bdev_io": 0, 00:19:57.699 "completed_nvme_io": 27273, 00:19:57.699 "transports": [ 00:19:57.699 { 00:19:57.699 "trtype": "TCP" 00:19:57.699 } 00:19:57.699 ] 00:19:57.699 }, 00:19:57.699 { 00:19:57.699 "name": "nvmf_tgt_poll_group_003", 00:19:57.699 "admin_qpairs": 0, 00:19:57.699 "io_qpairs": 1, 00:19:57.699 "current_admin_qpairs": 0, 00:19:57.699 "current_io_qpairs": 1, 00:19:57.699 "pending_bdev_io": 0, 00:19:57.699 "completed_nvme_io": 24572, 00:19:57.699 "transports": [ 00:19:57.699 { 00:19:57.699 "trtype": "TCP" 00:19:57.699 } 00:19:57.699 ] 00:19:57.699 } 00:19:57.699 ] 00:19:57.699 }' 00:19:57.699 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:57.699 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:19:57.699 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:19:57.699 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:19:57.699 17:56:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3072615 00:20:05.826 Initializing NVMe Controllers 00:20:05.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:05.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:05.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:05.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:05.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:20:05.826 Initialization complete. Launching workers. 00:20:05.826 ======================================================== 00:20:05.826 Latency(us) 00:20:05.826 Device Information : IOPS MiB/s Average min max 00:20:05.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 14122.19 55.16 4532.12 1037.97 7406.27 00:20:05.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14652.39 57.24 4368.64 1082.53 7556.94 00:20:05.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13945.69 54.48 4589.91 1008.45 9299.35 00:20:05.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14137.59 55.22 4527.74 1157.78 7511.56 00:20:05.826 ======================================================== 00:20:05.826 Total : 56857.88 222.10 4503.08 1008.45 9299.35 00:20:05.826 00:20:05.826 17:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:05.826 17:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:05.826 17:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:05.826 17:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:05.826 17:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:05.826 17:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:05.826 17:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:05.826 rmmod nvme_tcp 00:20:05.826 rmmod nvme_fabrics 00:20:05.826 rmmod nvme_keyring 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3072262 ']' 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3072262 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3072262 ']' 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3072262 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3072262 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3072262' 00:20:05.826 killing process with pid 3072262 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3072262 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3072262 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
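[Annotation] The @85-@87 check above is the pass criterion for this baseline run: with four reactors (mask 0xF) and four initiator connections, nvmf_get_stats must report exactly one active I/O qpair per poll group (count=4), which it did, and the perf table shows the four lcores sustaining about 56.9k IOPS in aggregate at ~4.5 ms average latency. The check reduces to:

  count=$(scripts/rpc.py nvmf_get_stats \
          | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
          | wc -l)
  [[ $count -ne 4 ]] && { echo "expected 1 io_qpair per poll group"; exit 1; }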
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.826 17:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.733 17:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:07.733 17:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:07.733 17:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:07.733 17:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:09.113 17:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:11.023 17:56:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:16.311 17:57:03 
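[Annotation] This second device scan is not redundant: a few entries back (@94, @58-@63) the harness unloaded and reloaded ice, which destroys any previously configured channels and re-creates the netdevs, so discovery must run again after the fixed sleep 5. A more defensive variant (an assumption, not what perf_adq.sh does) would poll for the interface instead:

  rmmod ice && modprobe ice
  for _ in $(seq 1 30); do
    ip link show cvl_0_0 &>/dev/null && break   # udev-renamed port, per the scan
    sleep 1
  done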
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:16.311 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:16.311 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:16.311 Found net devices under 0000:31:00.0: cvl_0_0 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:16.311 17:57:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:16.311 Found net devices under 0000:31:00.1: cvl_0_1 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:16.311 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:16.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:16.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:20:16.312 00:20:16.312 --- 10.0.0.2 ping statistics --- 00:20:16.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.312 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:16.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:16.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:20:16.312 00:20:16.312 --- 10.0.0.1 ping statistics --- 00:20:16.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.312 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:16.312 17:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:16.312 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:16.312 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:16.312 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:16.312 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:16.312 net.core.busy_poll = 1 00:20:16.312 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:20:16.312 net.core.busy_read = 1 00:20:16.312 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:16.312 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:16.572 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:16.572 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:16.572 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:16.572 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:16.572 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:16.572 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:16.572 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.572 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3077825 00:20:16.572 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3077825 00:20:16.572 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3077825 ']' 00:20:16.572 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.572 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.572 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.572 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.572 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.573 17:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:16.573 [2024-12-06 17:57:04.247971] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:20:16.573 [2024-12-06 17:57:04.248021] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.573 [2024-12-06 17:57:04.336331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:16.573 [2024-12-06 17:57:04.372044] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
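[Annotation] The adq_configure_driver entries just above are the heart of the ADQ setup, all executed inside cvl_0_0_ns_spdk: hardware TC offload on, the driver's channel-pkt-inspect-optimize private flag off, busy polling enabled, and an mqprio qdisc splitting the port's queues into two hardware traffic classes, with a flower rule steering NVMe/TCP flows (dst 10.0.0.2:4420) into TC1 entirely in hardware (skip_sw). Condensed:

  ethtool --offload cvl_0_0 hw-tc-offload on
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1 net.core.busy_read=1
  # TC0 = 2 queues at offset 0 (default traffic),
  # TC1 = 2 queues at offset 2 (NVMe/TCP), offloaded in channel mode:
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 \
     queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
     dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper at @38 then aligns transmit-queue selection (XPS) with the matching receive queues before the target is restarted below.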
00:20:16.573 [2024-12-06 17:57:04.372080] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.573 [2024-12-06 17:57:04.372088] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.573 [2024-12-06 17:57:04.372095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.573 [2024-12-06 17:57:04.372107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:16.573 [2024-12-06 17:57:04.373515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.573 [2024-12-06 17:57:04.373608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.573 [2024-12-06 17:57:04.373764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.573 [2024-12-06 17:57:04.373765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.511 17:57:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:17.511 [2024-12-06 17:57:05.152551] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:17.511 Malloc1 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:17.511 [2024-12-06 17:57:05.209750] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3078128 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:17.511 17:57:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:19.413 17:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:19.413 17:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.413 17:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:19.413 17:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.413 17:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:19.413 "tick_rate": 2400000000, 00:20:19.413 "poll_groups": [ 00:20:19.413 { 00:20:19.413 "name": "nvmf_tgt_poll_group_000", 00:20:19.413 "admin_qpairs": 1, 00:20:19.413 "io_qpairs": 2, 00:20:19.413 "current_admin_qpairs": 1, 00:20:19.413 "current_io_qpairs": 2, 00:20:19.413 "pending_bdev_io": 0, 00:20:19.413 "completed_nvme_io": 34691, 00:20:19.413 "transports": [ 00:20:19.413 { 00:20:19.413 "trtype": "TCP" 00:20:19.413 } 00:20:19.413 ] 00:20:19.413 }, 00:20:19.413 { 00:20:19.413 "name": "nvmf_tgt_poll_group_001", 00:20:19.413 "admin_qpairs": 0, 00:20:19.413 "io_qpairs": 2, 00:20:19.413 "current_admin_qpairs": 0, 00:20:19.413 "current_io_qpairs": 2, 00:20:19.413 "pending_bdev_io": 0, 00:20:19.413 "completed_nvme_io": 37734, 00:20:19.413 "transports": [ 00:20:19.413 { 00:20:19.413 "trtype": "TCP" 00:20:19.413 } 00:20:19.413 ] 00:20:19.413 }, 00:20:19.413 { 00:20:19.413 "name": "nvmf_tgt_poll_group_002", 00:20:19.413 "admin_qpairs": 0, 00:20:19.413 "io_qpairs": 0, 00:20:19.413 "current_admin_qpairs": 0, 00:20:19.413 "current_io_qpairs": 0, 00:20:19.413 "pending_bdev_io": 0, 00:20:19.413 "completed_nvme_io": 0, 00:20:19.413 "transports": [ 00:20:19.413 { 00:20:19.413 "trtype": "TCP" 00:20:19.413 } 00:20:19.413 ] 00:20:19.413 }, 00:20:19.413 { 00:20:19.413 "name": "nvmf_tgt_poll_group_003", 00:20:19.413 "admin_qpairs": 0, 00:20:19.413 "io_qpairs": 0, 00:20:19.413 "current_admin_qpairs": 0, 00:20:19.413 "current_io_qpairs": 0, 00:20:19.413 "pending_bdev_io": 0, 00:20:19.413 "completed_nvme_io": 0, 00:20:19.413 "transports": [ 00:20:19.413 { 00:20:19.413 "trtype": "TCP" 00:20:19.413 } 00:20:19.413 ] 00:20:19.413 } 00:20:19.413 ] 00:20:19.413 }' 00:20:19.413 17:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:19.413 17:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:19.671 17:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:19.671 17:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:19.671 17:57:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3078128 00:20:27.789 Initializing NVMe Controllers 00:20:27.789 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:27.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:27.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:27.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:27.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:27.789 Initialization complete. Launching workers. 
00:20:27.790 ======================================================== 00:20:27.790 Latency(us) 00:20:27.790 Device Information : IOPS MiB/s Average min max 00:20:27.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7888.50 30.81 8138.54 954.47 50214.73 00:20:27.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8980.20 35.08 7128.41 865.41 50820.43 00:20:27.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11000.80 42.97 5818.82 1134.12 49809.27 00:20:27.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9063.00 35.40 7063.51 1225.03 50366.37 00:20:27.790 ======================================================== 00:20:27.790 Total : 36932.49 144.27 6938.16 865.41 50820.43 00:20:27.790 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:27.790 rmmod nvme_tcp 00:20:27.790 rmmod nvme_fabrics 00:20:27.790 rmmod nvme_keyring 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3077825 ']' 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3077825 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3077825 ']' 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3077825 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3077825 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3077825' 00:20:27.790 killing process with pid 3077825 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3077825 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3077825 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:27.790 17:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.790 17:57:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:31.082 00:20:31.082 real 0m50.857s 00:20:31.082 user 2m47.915s 00:20:31.082 sys 0m9.108s 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:31.082 ************************************ 00:20:31.082 END TEST nvmf_perf_adq 00:20:31.082 ************************************ 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:31.082 ************************************ 00:20:31.082 START TEST nvmf_shutdown 00:20:31.082 ************************************ 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:31.082 * Looking for test storage... 
00:20:31.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:31.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.082 --rc genhtml_branch_coverage=1 00:20:31.082 --rc genhtml_function_coverage=1 00:20:31.082 --rc genhtml_legend=1 00:20:31.082 --rc geninfo_all_blocks=1 00:20:31.082 --rc geninfo_unexecuted_blocks=1 00:20:31.082 00:20:31.082 ' 00:20:31.082 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:31.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.083 --rc genhtml_branch_coverage=1 00:20:31.083 --rc genhtml_function_coverage=1 00:20:31.083 --rc genhtml_legend=1 00:20:31.083 --rc geninfo_all_blocks=1 00:20:31.083 --rc geninfo_unexecuted_blocks=1 00:20:31.083 00:20:31.083 ' 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:31.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.083 --rc genhtml_branch_coverage=1 00:20:31.083 --rc genhtml_function_coverage=1 00:20:31.083 --rc genhtml_legend=1 00:20:31.083 --rc geninfo_all_blocks=1 00:20:31.083 --rc geninfo_unexecuted_blocks=1 00:20:31.083 00:20:31.083 ' 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:31.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.083 --rc genhtml_branch_coverage=1 00:20:31.083 --rc genhtml_function_coverage=1 00:20:31.083 --rc genhtml_legend=1 00:20:31.083 --rc geninfo_all_blocks=1 00:20:31.083 --rc geninfo_unexecuted_blocks=1 00:20:31.083 00:20:31.083 ' 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
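The xtrace above is walking the dotted-version comparison in scripts/common.sh (lt 1.15 2) to decide which lcov options apply. A condensed, standalone rendering of the same field-by-field algorithm, simplified for illustration (the real helper also validates each field as a decimal via its decimal function):

# Compare two dot/dash-separated versions field by field; succeed if $1 < $2.
lt() {
    local -a ver1 ver2
    IFS='.-' read -ra ver1 <<< "$1"
    IFS='.-' read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # earlier field decides
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # versions are equal
}
lt 1.15 2 && echo older   # prints "older", matching the trace (1 < 2)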
00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:31.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:31.083 17:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:31.083 ************************************ 00:20:31.083 START TEST nvmf_shutdown_tc1 00:20:31.083 ************************************ 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:31.083 17:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:36.363 17:57:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:36.363 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:36.364 17:57:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:36.364 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:36.364 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:36.364 Found net devices under 0000:31:00.0: cvl_0_0 00:20:36.364 17:57:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:36.364 Found net devices under 0000:31:00.1: cvl_0_1 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:36.364 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:36.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:20:36.625 00:20:36.625 --- 10.0.0.2 ping statistics --- 00:20:36.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.625 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:36.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:20:36.625 00:20:36.625 --- 10.0.0.1 ping statistics --- 00:20:36.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.625 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3085426 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3085426 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3085426 ']' 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
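For reference, the network bring-up traced above (nvmf_tcp_init) is a small namespace recipe: the first E810 port becomes the target NIC inside cvl_0_0_ns_spdk and the second stays in the root namespace as the initiator. Reconstructed from the commands in this log:

# Target side lives in its own namespace; initiator stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP (port 4420) in through the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity checks in both directions, matching the ping output above.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1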
00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:36.625 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:36.625 [2024-12-06 17:57:24.347905] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:20:36.625 [2024-12-06 17:57:24.347964] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.625 [2024-12-06 17:57:24.422325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:36.885 [2024-12-06 17:57:24.452060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.885 [2024-12-06 17:57:24.452089] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.885 [2024-12-06 17:57:24.452094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.885 [2024-12-06 17:57:24.452104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.885 [2024-12-06 17:57:24.452108] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.885 [2024-12-06 17:57:24.453405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.885 [2024-12-06 17:57:24.453561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.885 [2024-12-06 17:57:24.453837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.885 [2024-12-06 17:57:24.453837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:36.885 [2024-12-06 17:57:24.561387] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:36.885 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:36.886 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:36.886 17:57:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.886 17:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:36.886 Malloc1 00:20:36.886 [2024-12-06 17:57:24.650093] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.886 Malloc2 00:20:36.886 Malloc3 00:20:37.146 Malloc4 00:20:37.146 Malloc5 00:20:37.146 Malloc6 00:20:37.146 Malloc7 00:20:37.146 Malloc8 00:20:37.146 Malloc9 00:20:37.405 Malloc10 00:20:37.405 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.405 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:37.405 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:37.405 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:37.405 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3085636 00:20:37.405 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3085636 /var/tmp/bdevperf.sock 00:20:37.405 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3085636 ']' 00:20:37.405 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.405 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.405 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
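The loop traced above (target/shutdown.sh@28-29) appends one RPC batch per subsystem to rpcs.txt and plays the file back with a single rpc_cmd, which is what produces Malloc1 through Malloc10 and the 10.0.0.2:4420 listeners. A plausible shape for each batch, inferred from the bdevs, NQNs, and listener visible in this log rather than quoted from shutdown.sh (the serial-number format in particular is a guess); the matching initiator-side JSON is generated next by gen_nvmf_target_json, traced below:

# Hypothetical reconstruction of the generated rpcs.txt (values from this run:
# MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, target 10.0.0.2:4420).
for i in {1..10}; do
cat >> rpcs.txt <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done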
00:20:37.405 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.405 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:37.405 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:37.405 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:37.405 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:37.405 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.406 { 00:20:37.406 "params": { 00:20:37.406 "name": "Nvme$subsystem", 00:20:37.406 "trtype": "$TEST_TRANSPORT", 00:20:37.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.406 "adrfam": "ipv4", 00:20:37.406 "trsvcid": "$NVMF_PORT", 00:20:37.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.406 "hdgst": ${hdgst:-false}, 00:20:37.406 "ddgst": ${ddgst:-false} 00:20:37.406 }, 00:20:37.406 "method": "bdev_nvme_attach_controller" 00:20:37.406 } 00:20:37.406 EOF 00:20:37.406 )") 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.406 { 00:20:37.406 "params": { 00:20:37.406 "name": "Nvme$subsystem", 00:20:37.406 "trtype": "$TEST_TRANSPORT", 00:20:37.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.406 "adrfam": "ipv4", 00:20:37.406 "trsvcid": "$NVMF_PORT", 00:20:37.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.406 "hdgst": ${hdgst:-false}, 00:20:37.406 "ddgst": ${ddgst:-false} 00:20:37.406 }, 00:20:37.406 "method": "bdev_nvme_attach_controller" 00:20:37.406 } 00:20:37.406 EOF 00:20:37.406 )") 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.406 { 00:20:37.406 "params": { 00:20:37.406 "name": "Nvme$subsystem", 00:20:37.406 "trtype": "$TEST_TRANSPORT", 00:20:37.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.406 "adrfam": "ipv4", 00:20:37.406 "trsvcid": "$NVMF_PORT", 00:20:37.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.406 "hdgst": ${hdgst:-false}, 00:20:37.406 "ddgst": ${ddgst:-false} 00:20:37.406 }, 00:20:37.406 "method": "bdev_nvme_attach_controller" 
00:20:37.406 } 00:20:37.406 EOF 00:20:37.406 )") 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.406 { 00:20:37.406 "params": { 00:20:37.406 "name": "Nvme$subsystem", 00:20:37.406 "trtype": "$TEST_TRANSPORT", 00:20:37.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.406 "adrfam": "ipv4", 00:20:37.406 "trsvcid": "$NVMF_PORT", 00:20:37.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.406 "hdgst": ${hdgst:-false}, 00:20:37.406 "ddgst": ${ddgst:-false} 00:20:37.406 }, 00:20:37.406 "method": "bdev_nvme_attach_controller" 00:20:37.406 } 00:20:37.406 EOF 00:20:37.406 )") 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.406 { 00:20:37.406 "params": { 00:20:37.406 "name": "Nvme$subsystem", 00:20:37.406 "trtype": "$TEST_TRANSPORT", 00:20:37.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.406 "adrfam": "ipv4", 00:20:37.406 "trsvcid": "$NVMF_PORT", 00:20:37.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.406 "hdgst": ${hdgst:-false}, 00:20:37.406 "ddgst": ${ddgst:-false} 00:20:37.406 }, 00:20:37.406 "method": "bdev_nvme_attach_controller" 00:20:37.406 } 00:20:37.406 EOF 00:20:37.406 )") 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.406 { 00:20:37.406 "params": { 00:20:37.406 "name": "Nvme$subsystem", 00:20:37.406 "trtype": "$TEST_TRANSPORT", 00:20:37.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.406 "adrfam": "ipv4", 00:20:37.406 "trsvcid": "$NVMF_PORT", 00:20:37.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.406 "hdgst": ${hdgst:-false}, 00:20:37.406 "ddgst": ${ddgst:-false} 00:20:37.406 }, 00:20:37.406 "method": "bdev_nvme_attach_controller" 00:20:37.406 } 00:20:37.406 EOF 00:20:37.406 )") 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.406 { 00:20:37.406 "params": { 00:20:37.406 "name": "Nvme$subsystem", 00:20:37.406 "trtype": "$TEST_TRANSPORT", 00:20:37.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.406 "adrfam": "ipv4", 00:20:37.406 "trsvcid": "$NVMF_PORT", 00:20:37.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.406 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.406 "hdgst": ${hdgst:-false}, 00:20:37.406 "ddgst": ${ddgst:-false} 00:20:37.406 }, 00:20:37.406 "method": "bdev_nvme_attach_controller" 00:20:37.406 } 00:20:37.406 EOF 00:20:37.406 )") 00:20:37.406 [2024-12-06 17:57:25.064124] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:20:37.406 [2024-12-06 17:57:25.064176] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.406 { 00:20:37.406 "params": { 00:20:37.406 "name": "Nvme$subsystem", 00:20:37.406 "trtype": "$TEST_TRANSPORT", 00:20:37.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.406 "adrfam": "ipv4", 00:20:37.406 "trsvcid": "$NVMF_PORT", 00:20:37.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.406 "hdgst": ${hdgst:-false}, 00:20:37.406 "ddgst": ${ddgst:-false} 00:20:37.406 }, 00:20:37.406 "method": "bdev_nvme_attach_controller" 00:20:37.406 } 00:20:37.406 EOF 00:20:37.406 )") 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.406 { 00:20:37.406 "params": { 00:20:37.406 "name": "Nvme$subsystem", 00:20:37.406 "trtype": "$TEST_TRANSPORT", 00:20:37.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.406 "adrfam": "ipv4", 00:20:37.406 "trsvcid": "$NVMF_PORT", 00:20:37.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.406 "hdgst": ${hdgst:-false}, 00:20:37.406 "ddgst": ${ddgst:-false} 00:20:37.406 }, 00:20:37.406 "method": "bdev_nvme_attach_controller" 00:20:37.406 } 00:20:37.406 EOF 00:20:37.406 )") 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.406 { 00:20:37.406 "params": { 00:20:37.406 "name": "Nvme$subsystem", 00:20:37.406 "trtype": "$TEST_TRANSPORT", 00:20:37.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.406 "adrfam": "ipv4", 00:20:37.406 "trsvcid": "$NVMF_PORT", 00:20:37.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.406 "hdgst": ${hdgst:-false}, 00:20:37.406 "ddgst": ${ddgst:-false} 00:20:37.406 }, 00:20:37.406 "method": "bdev_nvme_attach_controller" 00:20:37.406 } 00:20:37.406 EOF 00:20:37.406 )") 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# cat 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:37.406 17:57:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:37.406 "params": { 00:20:37.406 "name": "Nvme1", 00:20:37.407 "trtype": "tcp", 00:20:37.407 "traddr": "10.0.0.2", 00:20:37.407 "adrfam": "ipv4", 00:20:37.407 "trsvcid": "4420", 00:20:37.407 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.407 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.407 "hdgst": false, 00:20:37.407 "ddgst": false 00:20:37.407 }, 00:20:37.407 "method": "bdev_nvme_attach_controller" 00:20:37.407 },{ 00:20:37.407 "params": { 00:20:37.407 "name": "Nvme2", 00:20:37.407 "trtype": "tcp", 00:20:37.407 "traddr": "10.0.0.2", 00:20:37.407 "adrfam": "ipv4", 00:20:37.407 "trsvcid": "4420", 00:20:37.407 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:37.407 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:37.407 "hdgst": false, 00:20:37.407 "ddgst": false 00:20:37.407 }, 00:20:37.407 "method": "bdev_nvme_attach_controller" 00:20:37.407 },{ 00:20:37.407 "params": { 00:20:37.407 "name": "Nvme3", 00:20:37.407 "trtype": "tcp", 00:20:37.407 "traddr": "10.0.0.2", 00:20:37.407 "adrfam": "ipv4", 00:20:37.407 "trsvcid": "4420", 00:20:37.407 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:37.407 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:37.407 "hdgst": false, 00:20:37.407 "ddgst": false 00:20:37.407 }, 00:20:37.407 "method": "bdev_nvme_attach_controller" 00:20:37.407 },{ 00:20:37.407 "params": { 00:20:37.407 "name": "Nvme4", 00:20:37.407 "trtype": "tcp", 00:20:37.407 "traddr": "10.0.0.2", 00:20:37.407 "adrfam": "ipv4", 00:20:37.407 "trsvcid": "4420", 00:20:37.407 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:37.407 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:37.407 "hdgst": false, 00:20:37.407 "ddgst": false 00:20:37.407 }, 00:20:37.407 "method": "bdev_nvme_attach_controller" 00:20:37.407 },{ 00:20:37.407 "params": { 00:20:37.407 "name": "Nvme5", 00:20:37.407 "trtype": "tcp", 00:20:37.407 "traddr": "10.0.0.2", 00:20:37.407 "adrfam": "ipv4", 00:20:37.407 "trsvcid": "4420", 00:20:37.407 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:37.407 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:37.407 "hdgst": false, 00:20:37.407 "ddgst": false 00:20:37.407 }, 00:20:37.407 "method": "bdev_nvme_attach_controller" 00:20:37.407 },{ 00:20:37.407 "params": { 00:20:37.407 "name": "Nvme6", 00:20:37.407 "trtype": "tcp", 00:20:37.407 "traddr": "10.0.0.2", 00:20:37.407 "adrfam": "ipv4", 00:20:37.407 "trsvcid": "4420", 00:20:37.407 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:37.407 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:37.407 "hdgst": false, 00:20:37.407 "ddgst": false 00:20:37.407 }, 00:20:37.407 "method": "bdev_nvme_attach_controller" 00:20:37.407 },{ 00:20:37.407 "params": { 00:20:37.407 "name": "Nvme7", 00:20:37.407 "trtype": "tcp", 00:20:37.407 "traddr": "10.0.0.2", 00:20:37.407 "adrfam": "ipv4", 00:20:37.407 "trsvcid": "4420", 00:20:37.407 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:37.407 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:37.407 "hdgst": false, 00:20:37.407 "ddgst": false 00:20:37.407 }, 00:20:37.407 "method": "bdev_nvme_attach_controller" 00:20:37.407 },{ 00:20:37.407 "params": { 00:20:37.407 "name": "Nvme8", 00:20:37.407 "trtype": "tcp", 00:20:37.407 "traddr": "10.0.0.2", 00:20:37.407 "adrfam": "ipv4", 00:20:37.407 
"trsvcid": "4420", 00:20:37.407 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:37.407 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:37.407 "hdgst": false, 00:20:37.407 "ddgst": false 00:20:37.407 }, 00:20:37.407 "method": "bdev_nvme_attach_controller" 00:20:37.407 },{ 00:20:37.407 "params": { 00:20:37.407 "name": "Nvme9", 00:20:37.407 "trtype": "tcp", 00:20:37.407 "traddr": "10.0.0.2", 00:20:37.407 "adrfam": "ipv4", 00:20:37.407 "trsvcid": "4420", 00:20:37.407 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:37.407 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:37.407 "hdgst": false, 00:20:37.407 "ddgst": false 00:20:37.407 }, 00:20:37.407 "method": "bdev_nvme_attach_controller" 00:20:37.407 },{ 00:20:37.407 "params": { 00:20:37.407 "name": "Nvme10", 00:20:37.407 "trtype": "tcp", 00:20:37.407 "traddr": "10.0.0.2", 00:20:37.407 "adrfam": "ipv4", 00:20:37.407 "trsvcid": "4420", 00:20:37.407 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:37.407 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:37.407 "hdgst": false, 00:20:37.407 "ddgst": false 00:20:37.407 }, 00:20:37.407 "method": "bdev_nvme_attach_controller" 00:20:37.407 }' 00:20:37.407 [2024-12-06 17:57:25.129583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.407 [2024-12-06 17:57:25.159940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.312 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.312 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:39.312 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:39.312 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.312 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:39.312 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.312 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3085636 00:20:39.312 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:39.312 17:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:40.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3085636 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:40.247 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3085426 00:20:40.247 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:40.247 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:40.247 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:40.247 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 
00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.248 { 00:20:40.248 "params": { 00:20:40.248 "name": "Nvme$subsystem", 00:20:40.248 "trtype": "$TEST_TRANSPORT", 00:20:40.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.248 "adrfam": "ipv4", 00:20:40.248 "trsvcid": "$NVMF_PORT", 00:20:40.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.248 "hdgst": ${hdgst:-false}, 00:20:40.248 "ddgst": ${ddgst:-false} 00:20:40.248 }, 00:20:40.248 "method": "bdev_nvme_attach_controller" 00:20:40.248 } 00:20:40.248 EOF 00:20:40.248 )") 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.248 { 00:20:40.248 "params": { 00:20:40.248 "name": "Nvme$subsystem", 00:20:40.248 "trtype": "$TEST_TRANSPORT", 00:20:40.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.248 "adrfam": "ipv4", 00:20:40.248 "trsvcid": "$NVMF_PORT", 00:20:40.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.248 "hdgst": ${hdgst:-false}, 00:20:40.248 "ddgst": ${ddgst:-false} 00:20:40.248 }, 00:20:40.248 "method": "bdev_nvme_attach_controller" 00:20:40.248 } 00:20:40.248 EOF 00:20:40.248 )") 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.248 { 00:20:40.248 "params": { 00:20:40.248 "name": "Nvme$subsystem", 00:20:40.248 "trtype": "$TEST_TRANSPORT", 00:20:40.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.248 "adrfam": "ipv4", 00:20:40.248 "trsvcid": "$NVMF_PORT", 00:20:40.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.248 "hdgst": ${hdgst:-false}, 00:20:40.248 "ddgst": ${ddgst:-false} 00:20:40.248 }, 00:20:40.248 "method": "bdev_nvme_attach_controller" 00:20:40.248 } 00:20:40.248 EOF 00:20:40.248 )") 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.248 { 00:20:40.248 "params": { 00:20:40.248 "name": "Nvme$subsystem", 00:20:40.248 "trtype": "$TEST_TRANSPORT", 00:20:40.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.248 "adrfam": "ipv4", 00:20:40.248 "trsvcid": "$NVMF_PORT", 00:20:40.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.248 "hdgst": ${hdgst:-false}, 00:20:40.248 "ddgst": ${ddgst:-false} 00:20:40.248 }, 00:20:40.248 "method": 
"bdev_nvme_attach_controller" 00:20:40.248 } 00:20:40.248 EOF 00:20:40.248 )") 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.248 { 00:20:40.248 "params": { 00:20:40.248 "name": "Nvme$subsystem", 00:20:40.248 "trtype": "$TEST_TRANSPORT", 00:20:40.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.248 "adrfam": "ipv4", 00:20:40.248 "trsvcid": "$NVMF_PORT", 00:20:40.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.248 "hdgst": ${hdgst:-false}, 00:20:40.248 "ddgst": ${ddgst:-false} 00:20:40.248 }, 00:20:40.248 "method": "bdev_nvme_attach_controller" 00:20:40.248 } 00:20:40.248 EOF 00:20:40.248 )") 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.248 { 00:20:40.248 "params": { 00:20:40.248 "name": "Nvme$subsystem", 00:20:40.248 "trtype": "$TEST_TRANSPORT", 00:20:40.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.248 "adrfam": "ipv4", 00:20:40.248 "trsvcid": "$NVMF_PORT", 00:20:40.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.248 "hdgst": ${hdgst:-false}, 00:20:40.248 "ddgst": ${ddgst:-false} 00:20:40.248 }, 00:20:40.248 "method": "bdev_nvme_attach_controller" 00:20:40.248 } 00:20:40.248 EOF 00:20:40.248 )") 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.248 { 00:20:40.248 "params": { 00:20:40.248 "name": "Nvme$subsystem", 00:20:40.248 "trtype": "$TEST_TRANSPORT", 00:20:40.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.248 "adrfam": "ipv4", 00:20:40.248 "trsvcid": "$NVMF_PORT", 00:20:40.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.248 "hdgst": ${hdgst:-false}, 00:20:40.248 "ddgst": ${ddgst:-false} 00:20:40.248 }, 00:20:40.248 "method": "bdev_nvme_attach_controller" 00:20:40.248 } 00:20:40.248 EOF 00:20:40.248 )") 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:40.248 [2024-12-06 17:57:27.942077] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:20:40.248 [2024-12-06 17:57:27.942143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086216 ] 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.248 { 00:20:40.248 "params": { 00:20:40.248 "name": "Nvme$subsystem", 00:20:40.248 "trtype": "$TEST_TRANSPORT", 00:20:40.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.248 "adrfam": "ipv4", 00:20:40.248 "trsvcid": "$NVMF_PORT", 00:20:40.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.248 "hdgst": ${hdgst:-false}, 00:20:40.248 "ddgst": ${ddgst:-false} 00:20:40.248 }, 00:20:40.248 "method": "bdev_nvme_attach_controller" 00:20:40.248 } 00:20:40.248 EOF 00:20:40.248 )") 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.248 { 00:20:40.248 "params": { 00:20:40.248 "name": "Nvme$subsystem", 00:20:40.248 "trtype": "$TEST_TRANSPORT", 00:20:40.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.248 "adrfam": "ipv4", 00:20:40.248 "trsvcid": "$NVMF_PORT", 00:20:40.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.248 "hdgst": ${hdgst:-false}, 00:20:40.248 "ddgst": ${ddgst:-false} 00:20:40.248 }, 00:20:40.248 "method": "bdev_nvme_attach_controller" 00:20:40.248 } 00:20:40.248 EOF 00:20:40.248 )") 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.248 { 00:20:40.248 "params": { 00:20:40.248 "name": "Nvme$subsystem", 00:20:40.248 "trtype": "$TEST_TRANSPORT", 00:20:40.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.248 "adrfam": "ipv4", 00:20:40.248 "trsvcid": "$NVMF_PORT", 00:20:40.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.248 "hdgst": ${hdgst:-false}, 00:20:40.248 "ddgst": ${ddgst:-false} 00:20:40.248 }, 00:20:40.248 "method": "bdev_nvme_attach_controller" 00:20:40.248 } 00:20:40.248 EOF 00:20:40.248 )") 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
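[Editor's note — annotation, not part of the captured log. The jq . / IFS=, / printf steps traced at nvmf/common.sh@584-@586 are how the per-subsystem fragments above get fused into the single JSON document printed next. A condensed equivalent, reusing the config array named in the trace (the real helper also embeds the joined list in its full bdev-subsystem envelope before jq sees it):]

(IFS=,; printf '%s\n' "${config[*]}") | jq .
# "${config[*]}" with IFS=, concatenates the array elements with commas;
# jq . validates the result and pretty-prints it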
00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:40.248 17:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:40.248 "params": { 00:20:40.249 "name": "Nvme1", 00:20:40.249 "trtype": "tcp", 00:20:40.249 "traddr": "10.0.0.2", 00:20:40.249 "adrfam": "ipv4", 00:20:40.249 "trsvcid": "4420", 00:20:40.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:40.249 "hdgst": false, 00:20:40.249 "ddgst": false 00:20:40.249 }, 00:20:40.249 "method": "bdev_nvme_attach_controller" 00:20:40.249 },{ 00:20:40.249 "params": { 00:20:40.249 "name": "Nvme2", 00:20:40.249 "trtype": "tcp", 00:20:40.249 "traddr": "10.0.0.2", 00:20:40.249 "adrfam": "ipv4", 00:20:40.249 "trsvcid": "4420", 00:20:40.249 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:40.249 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:40.249 "hdgst": false, 00:20:40.249 "ddgst": false 00:20:40.249 }, 00:20:40.249 "method": "bdev_nvme_attach_controller" 00:20:40.249 },{ 00:20:40.249 "params": { 00:20:40.249 "name": "Nvme3", 00:20:40.249 "trtype": "tcp", 00:20:40.249 "traddr": "10.0.0.2", 00:20:40.249 "adrfam": "ipv4", 00:20:40.249 "trsvcid": "4420", 00:20:40.249 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:40.249 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:40.249 "hdgst": false, 00:20:40.249 "ddgst": false 00:20:40.249 }, 00:20:40.249 "method": "bdev_nvme_attach_controller" 00:20:40.249 },{ 00:20:40.249 "params": { 00:20:40.249 "name": "Nvme4", 00:20:40.249 "trtype": "tcp", 00:20:40.249 "traddr": "10.0.0.2", 00:20:40.249 "adrfam": "ipv4", 00:20:40.249 "trsvcid": "4420", 00:20:40.249 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:40.249 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:40.249 "hdgst": false, 00:20:40.249 "ddgst": false 00:20:40.249 }, 00:20:40.249 "method": "bdev_nvme_attach_controller" 00:20:40.249 },{ 00:20:40.249 "params": { 00:20:40.249 "name": "Nvme5", 00:20:40.249 "trtype": "tcp", 00:20:40.249 "traddr": "10.0.0.2", 00:20:40.249 "adrfam": "ipv4", 00:20:40.249 "trsvcid": "4420", 00:20:40.249 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:40.249 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:40.249 "hdgst": false, 00:20:40.249 "ddgst": false 00:20:40.249 }, 00:20:40.249 "method": "bdev_nvme_attach_controller" 00:20:40.249 },{ 00:20:40.249 "params": { 00:20:40.249 "name": "Nvme6", 00:20:40.249 "trtype": "tcp", 00:20:40.249 "traddr": "10.0.0.2", 00:20:40.249 "adrfam": "ipv4", 00:20:40.249 "trsvcid": "4420", 00:20:40.249 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:40.249 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:40.249 "hdgst": false, 00:20:40.249 "ddgst": false 00:20:40.249 }, 00:20:40.249 "method": "bdev_nvme_attach_controller" 00:20:40.249 },{ 00:20:40.249 "params": { 00:20:40.249 "name": "Nvme7", 00:20:40.249 "trtype": "tcp", 00:20:40.249 "traddr": "10.0.0.2", 00:20:40.249 "adrfam": "ipv4", 00:20:40.249 "trsvcid": "4420", 00:20:40.249 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:40.249 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:40.249 "hdgst": false, 00:20:40.249 "ddgst": false 00:20:40.249 }, 00:20:40.249 "method": "bdev_nvme_attach_controller" 00:20:40.249 },{ 00:20:40.249 "params": { 00:20:40.249 "name": "Nvme8", 00:20:40.249 "trtype": "tcp", 00:20:40.249 "traddr": "10.0.0.2", 00:20:40.249 "adrfam": "ipv4", 00:20:40.249 "trsvcid": "4420", 00:20:40.249 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:40.249 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:40.249 "hdgst": false, 00:20:40.249 "ddgst": false 00:20:40.249 }, 00:20:40.249 "method": "bdev_nvme_attach_controller" 00:20:40.249 },{ 00:20:40.249 "params": { 00:20:40.249 "name": "Nvme9", 00:20:40.249 "trtype": "tcp", 00:20:40.249 "traddr": "10.0.0.2", 00:20:40.249 "adrfam": "ipv4", 00:20:40.249 "trsvcid": "4420", 00:20:40.249 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:40.249 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:40.249 "hdgst": false, 00:20:40.249 "ddgst": false 00:20:40.249 }, 00:20:40.249 "method": "bdev_nvme_attach_controller" 00:20:40.249 },{ 00:20:40.249 "params": { 00:20:40.249 "name": "Nvme10", 00:20:40.249 "trtype": "tcp", 00:20:40.249 "traddr": "10.0.0.2", 00:20:40.249 "adrfam": "ipv4", 00:20:40.249 "trsvcid": "4420", 00:20:40.249 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:40.249 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:40.249 "hdgst": false, 00:20:40.249 "ddgst": false 00:20:40.249 }, 00:20:40.249 "method": "bdev_nvme_attach_controller" 00:20:40.249 }' 00:20:40.249 [2024-12-06 17:57:28.022542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.249 [2024-12-06 17:57:28.058665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.650 Running I/O for 1 seconds... 00:20:42.849 2376.00 IOPS, 148.50 MiB/s 00:20:42.849 Latency(us) 00:20:42.849 [2024-12-06T16:57:30.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.849 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.849 Verification LBA range: start 0x0 length 0x400 00:20:42.849 Nvme1n1 : 1.14 281.05 17.57 0.00 0.00 225468.07 14199.47 209715.20 00:20:42.849 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.849 Verification LBA range: start 0x0 length 0x400 00:20:42.849 Nvme2n1 : 1.13 282.07 17.63 0.00 0.00 220763.48 16384.00 194860.37 00:20:42.849 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.849 Verification LBA range: start 0x0 length 0x400 00:20:42.849 Nvme3n1 : 1.14 281.91 17.62 0.00 0.00 216656.21 17257.81 193112.75 00:20:42.849 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.849 Verification LBA range: start 0x0 length 0x400 00:20:42.849 Nvme4n1 : 1.16 330.63 20.66 0.00 0.00 181061.72 3768.32 214958.08 00:20:42.849 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.849 Verification LBA range: start 0x0 length 0x400 00:20:42.849 Nvme5n1 : 1.14 279.79 17.49 0.00 0.00 210954.92 14090.24 222822.40 00:20:42.849 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.849 Verification LBA range: start 0x0 length 0x400 00:20:42.849 Nvme6n1 : 1.17 329.44 20.59 0.00 0.00 176021.76 14199.47 209715.20 00:20:42.849 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.849 Verification LBA range: start 0x0 length 0x400 00:20:42.849 Nvme7n1 : 1.18 284.67 17.79 0.00 0.00 190697.20 10868.05 187869.87 00:20:42.849 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.849 Verification LBA range: start 0x0 length 0x400 00:20:42.849 Nvme8n1 : 1.18 326.66 20.42 0.00 0.00 171376.21 15837.87 193986.56 00:20:42.849 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:42.849 Verification LBA range: start 0x0 length 0x400 00:20:42.849 Nvme9n1 : 1.17 273.83 17.11 0.00 0.00 200394.07 13981.01 246415.36 00:20:42.849 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:20:42.849 Verification LBA range: start 0x0 length 0x400 00:20:42.849 Nvme10n1 : 1.18 324.55 20.28 0.00 0.00 166273.71 9721.17 223696.21 00:20:42.849 [2024-12-06T16:57:30.676Z] =================================================================================================================== 00:20:42.849 [2024-12-06T16:57:30.676Z] Total : 2994.59 187.16 0.00 0.00 194298.40 3768.32 246415.36 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:43.110 rmmod nvme_tcp 00:20:43.110 rmmod nvme_fabrics 00:20:43.110 rmmod nvme_keyring 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3085426 ']' 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3085426 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3085426 ']' 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3085426 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3085426 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3085426' 00:20:43.110 killing process with pid 3085426 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3085426 00:20:43.110 17:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3085426 00:20:43.370 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:43.370 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:43.370 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:43.370 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:43.370 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:43.370 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:43.370 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:43.370 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:43.370 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:43.370 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.370 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.370 17:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.277 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:45.277 00:20:45.277 real 0m14.189s 00:20:45.277 user 0m31.658s 00:20:45.277 sys 0m5.187s 00:20:45.277 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:45.277 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:45.277 ************************************ 00:20:45.277 END TEST nvmf_shutdown_tc1 00:20:45.277 ************************************ 00:20:45.277 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:45.277 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:45.277 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:45.277 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:45.536 ************************************ 00:20:45.536 START TEST nvmf_shutdown_tc2 00:20:45.536 ************************************ 00:20:45.536 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:20:45.536 17:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:45.536 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:45.536 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:45.536 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.536 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:45.536 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:45.536 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:45.536 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.536 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.536 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.536 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:45.536 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:45.536 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:45.536 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:45.537 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:45.537 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:45.537 Found net devices under 0000:31:00.0: cvl_0_0 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:45.537 17:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:45.537 Found net devices under 0000:31:00.1: cvl_0_1 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:45.537 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:45.538 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:45.538 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:45.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:45.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:20:45.538 00:20:45.538 --- 10.0.0.2 ping statistics --- 00:20:45.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.538 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:20:45.538 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:45.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:45.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:20:45.538 00:20:45.538 --- 10.0.0.1 ping statistics --- 00:20:45.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.538 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:20:45.538 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:45.538 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:45.538 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:45.538 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:45.538 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:45.538 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:45.538 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:45.538 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:45.538 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:45.798 17:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3087603 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3087603 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3087603 ']' 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:45.798 [2024-12-06 17:57:33.418167] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:20:45.798 [2024-12-06 17:57:33.418204] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.798 [2024-12-06 17:57:33.479695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:45.798 [2024-12-06 17:57:33.509384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.798 [2024-12-06 17:57:33.509410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.798 [2024-12-06 17:57:33.509416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.798 [2024-12-06 17:57:33.509421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.798 [2024-12-06 17:57:33.509425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
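[Editor's note — annotation, not part of the captured log. nvmfappstart has just launched the tc2 target inside the test namespace; the trace at nvmf/common.sh@508 shows the netns prefix applied twice (NVMF_APP already carries it), which nests harmlessly. The launch pattern, with the flags decoded against the notices above and the reactor lines that follow:]

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x1E
# -m 0x1E:   core mask 0b11110 -> four reactors on cores 1-4, matching
#            "Total cores available: 4" and the reactor-started lines below
# -e 0xFFFF: enable every tracepoint group ("spdk_trace -s nvmf -i 0")
# -i 0:      shm instance id; the trace buffer lands in /dev/shm/nvmf_trace.0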
00:20:45.798 [2024-12-06 17:57:33.510900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.798 [2024-12-06 17:57:33.511050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:45.798 [2024-12-06 17:57:33.511201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:45.798 [2024-12-06 17:57:33.511356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:45.798 [2024-12-06 17:57:33.614868] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:45.798 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.058 17:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:46.058 Malloc1 00:20:46.058 [2024-12-06 17:57:33.701794] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.058 Malloc2 00:20:46.058 Malloc3 00:20:46.058 Malloc4 00:20:46.058 Malloc5 00:20:46.058 Malloc6 00:20:46.317 Malloc7 00:20:46.317 Malloc8 00:20:46.317 Malloc9 00:20:46.317 Malloc10 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3087773 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3087773 /var/tmp/bdevperf.sock 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3087773 ']' 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.317 17:57:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.317 { 00:20:46.317 "params": { 00:20:46.317 "name": "Nvme$subsystem", 00:20:46.317 "trtype": "$TEST_TRANSPORT", 00:20:46.317 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.317 "adrfam": "ipv4", 00:20:46.317 "trsvcid": "$NVMF_PORT", 00:20:46.317 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.317 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.317 "hdgst": ${hdgst:-false}, 00:20:46.317 "ddgst": ${ddgst:-false} 00:20:46.317 }, 00:20:46.317 "method": "bdev_nvme_attach_controller" 00:20:46.317 } 00:20:46.317 EOF 00:20:46.317 )") 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.317 { 00:20:46.317 "params": { 00:20:46.317 "name": "Nvme$subsystem", 00:20:46.317 "trtype": "$TEST_TRANSPORT", 00:20:46.317 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.317 "adrfam": "ipv4", 00:20:46.317 "trsvcid": "$NVMF_PORT", 00:20:46.317 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.317 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.317 "hdgst": ${hdgst:-false}, 00:20:46.317 "ddgst": ${ddgst:-false} 00:20:46.317 }, 00:20:46.317 "method": "bdev_nvme_attach_controller" 00:20:46.317 } 00:20:46.317 EOF 00:20:46.317 )") 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.317 { 00:20:46.317 "params": { 00:20:46.317 
"name": "Nvme$subsystem", 00:20:46.317 "trtype": "$TEST_TRANSPORT", 00:20:46.317 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.317 "adrfam": "ipv4", 00:20:46.317 "trsvcid": "$NVMF_PORT", 00:20:46.317 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.317 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.317 "hdgst": ${hdgst:-false}, 00:20:46.317 "ddgst": ${ddgst:-false} 00:20:46.317 }, 00:20:46.317 "method": "bdev_nvme_attach_controller" 00:20:46.317 } 00:20:46.317 EOF 00:20:46.317 )") 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.317 { 00:20:46.317 "params": { 00:20:46.317 "name": "Nvme$subsystem", 00:20:46.317 "trtype": "$TEST_TRANSPORT", 00:20:46.317 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.317 "adrfam": "ipv4", 00:20:46.317 "trsvcid": "$NVMF_PORT", 00:20:46.317 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.317 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.317 "hdgst": ${hdgst:-false}, 00:20:46.317 "ddgst": ${ddgst:-false} 00:20:46.317 }, 00:20:46.317 "method": "bdev_nvme_attach_controller" 00:20:46.317 } 00:20:46.317 EOF 00:20:46.317 )") 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.317 { 00:20:46.317 "params": { 00:20:46.317 "name": "Nvme$subsystem", 00:20:46.317 "trtype": "$TEST_TRANSPORT", 00:20:46.317 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.317 "adrfam": "ipv4", 00:20:46.317 "trsvcid": "$NVMF_PORT", 00:20:46.317 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.317 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.317 "hdgst": ${hdgst:-false}, 00:20:46.317 "ddgst": ${ddgst:-false} 00:20:46.317 }, 00:20:46.317 "method": "bdev_nvme_attach_controller" 00:20:46.317 } 00:20:46.317 EOF 00:20:46.317 )") 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.317 { 00:20:46.317 "params": { 00:20:46.317 "name": "Nvme$subsystem", 00:20:46.317 "trtype": "$TEST_TRANSPORT", 00:20:46.317 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.317 "adrfam": "ipv4", 00:20:46.317 "trsvcid": "$NVMF_PORT", 00:20:46.317 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.317 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.317 "hdgst": ${hdgst:-false}, 00:20:46.317 "ddgst": ${ddgst:-false} 00:20:46.317 }, 00:20:46.317 "method": "bdev_nvme_attach_controller" 00:20:46.317 } 00:20:46.317 EOF 00:20:46.317 )") 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.317 { 00:20:46.317 "params": { 00:20:46.317 "name": "Nvme$subsystem", 00:20:46.317 "trtype": "$TEST_TRANSPORT", 00:20:46.317 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.317 "adrfam": "ipv4", 00:20:46.317 "trsvcid": "$NVMF_PORT", 00:20:46.317 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.317 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.317 "hdgst": ${hdgst:-false}, 00:20:46.317 "ddgst": ${ddgst:-false} 00:20:46.317 }, 00:20:46.317 "method": "bdev_nvme_attach_controller" 00:20:46.317 } 00:20:46.317 EOF 00:20:46.317 )") 00:20:46.317 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:46.318 [2024-12-06 17:57:34.116675] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:20:46.318 [2024-12-06 17:57:34.116731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3087773 ] 00:20:46.318 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.318 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.318 { 00:20:46.318 "params": { 00:20:46.318 "name": "Nvme$subsystem", 00:20:46.318 "trtype": "$TEST_TRANSPORT", 00:20:46.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.318 "adrfam": "ipv4", 00:20:46.318 "trsvcid": "$NVMF_PORT", 00:20:46.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.318 "hdgst": ${hdgst:-false}, 00:20:46.318 "ddgst": ${ddgst:-false} 00:20:46.318 }, 00:20:46.318 "method": "bdev_nvme_attach_controller" 00:20:46.318 } 00:20:46.318 EOF 00:20:46.318 )") 00:20:46.318 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:46.318 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.318 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.318 { 00:20:46.318 "params": { 00:20:46.318 "name": "Nvme$subsystem", 00:20:46.318 "trtype": "$TEST_TRANSPORT", 00:20:46.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.318 "adrfam": "ipv4", 00:20:46.318 "trsvcid": "$NVMF_PORT", 00:20:46.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.318 "hdgst": ${hdgst:-false}, 00:20:46.318 "ddgst": ${ddgst:-false} 00:20:46.318 }, 00:20:46.318 "method": "bdev_nvme_attach_controller" 00:20:46.318 } 00:20:46.318 EOF 00:20:46.318 )") 00:20:46.318 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:46.318 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.318 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.318 { 00:20:46.318 "params": { 00:20:46.318 "name": "Nvme$subsystem", 00:20:46.318 "trtype": "$TEST_TRANSPORT", 00:20:46.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.318 
"adrfam": "ipv4", 00:20:46.318 "trsvcid": "$NVMF_PORT", 00:20:46.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.318 "hdgst": ${hdgst:-false}, 00:20:46.318 "ddgst": ${ddgst:-false} 00:20:46.318 }, 00:20:46.318 "method": "bdev_nvme_attach_controller" 00:20:46.318 } 00:20:46.318 EOF 00:20:46.318 )") 00:20:46.318 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:46.318 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:20:46.318 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:46.318 17:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:46.318 "params": { 00:20:46.318 "name": "Nvme1", 00:20:46.318 "trtype": "tcp", 00:20:46.318 "traddr": "10.0.0.2", 00:20:46.318 "adrfam": "ipv4", 00:20:46.318 "trsvcid": "4420", 00:20:46.318 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.318 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.318 "hdgst": false, 00:20:46.318 "ddgst": false 00:20:46.318 }, 00:20:46.318 "method": "bdev_nvme_attach_controller" 00:20:46.318 },{ 00:20:46.318 "params": { 00:20:46.318 "name": "Nvme2", 00:20:46.318 "trtype": "tcp", 00:20:46.318 "traddr": "10.0.0.2", 00:20:46.318 "adrfam": "ipv4", 00:20:46.318 "trsvcid": "4420", 00:20:46.318 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:46.318 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:46.318 "hdgst": false, 00:20:46.318 "ddgst": false 00:20:46.318 }, 00:20:46.318 "method": "bdev_nvme_attach_controller" 00:20:46.318 },{ 00:20:46.318 "params": { 00:20:46.318 "name": "Nvme3", 00:20:46.318 "trtype": "tcp", 00:20:46.318 "traddr": "10.0.0.2", 00:20:46.318 "adrfam": "ipv4", 00:20:46.318 "trsvcid": "4420", 00:20:46.318 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:46.318 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:46.318 "hdgst": false, 00:20:46.318 "ddgst": false 00:20:46.318 }, 00:20:46.318 "method": "bdev_nvme_attach_controller" 00:20:46.318 },{ 00:20:46.318 "params": { 00:20:46.318 "name": "Nvme4", 00:20:46.318 "trtype": "tcp", 00:20:46.318 "traddr": "10.0.0.2", 00:20:46.318 "adrfam": "ipv4", 00:20:46.318 "trsvcid": "4420", 00:20:46.318 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:46.318 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:46.318 "hdgst": false, 00:20:46.318 "ddgst": false 00:20:46.318 }, 00:20:46.318 "method": "bdev_nvme_attach_controller" 00:20:46.318 },{ 00:20:46.318 "params": { 00:20:46.318 "name": "Nvme5", 00:20:46.318 "trtype": "tcp", 00:20:46.318 "traddr": "10.0.0.2", 00:20:46.318 "adrfam": "ipv4", 00:20:46.318 "trsvcid": "4420", 00:20:46.318 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:46.318 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:46.318 "hdgst": false, 00:20:46.318 "ddgst": false 00:20:46.318 }, 00:20:46.318 "method": "bdev_nvme_attach_controller" 00:20:46.318 },{ 00:20:46.318 "params": { 00:20:46.318 "name": "Nvme6", 00:20:46.318 "trtype": "tcp", 00:20:46.318 "traddr": "10.0.0.2", 00:20:46.318 "adrfam": "ipv4", 00:20:46.318 "trsvcid": "4420", 00:20:46.318 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:46.318 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:46.318 "hdgst": false, 00:20:46.318 "ddgst": false 00:20:46.318 }, 00:20:46.318 "method": "bdev_nvme_attach_controller" 00:20:46.318 },{ 00:20:46.318 "params": { 00:20:46.318 "name": "Nvme7", 00:20:46.318 "trtype": "tcp", 00:20:46.318 "traddr": "10.0.0.2", 
00:20:46.318 "adrfam": "ipv4", 00:20:46.318 "trsvcid": "4420", 00:20:46.318 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:46.318 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:46.318 "hdgst": false, 00:20:46.318 "ddgst": false 00:20:46.318 }, 00:20:46.318 "method": "bdev_nvme_attach_controller" 00:20:46.318 },{ 00:20:46.318 "params": { 00:20:46.318 "name": "Nvme8", 00:20:46.318 "trtype": "tcp", 00:20:46.318 "traddr": "10.0.0.2", 00:20:46.318 "adrfam": "ipv4", 00:20:46.318 "trsvcid": "4420", 00:20:46.318 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:46.318 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:46.318 "hdgst": false, 00:20:46.318 "ddgst": false 00:20:46.318 }, 00:20:46.318 "method": "bdev_nvme_attach_controller" 00:20:46.318 },{ 00:20:46.318 "params": { 00:20:46.318 "name": "Nvme9", 00:20:46.318 "trtype": "tcp", 00:20:46.318 "traddr": "10.0.0.2", 00:20:46.318 "adrfam": "ipv4", 00:20:46.318 "trsvcid": "4420", 00:20:46.318 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:46.318 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:46.318 "hdgst": false, 00:20:46.318 "ddgst": false 00:20:46.318 }, 00:20:46.318 "method": "bdev_nvme_attach_controller" 00:20:46.318 },{ 00:20:46.318 "params": { 00:20:46.318 "name": "Nvme10", 00:20:46.318 "trtype": "tcp", 00:20:46.318 "traddr": "10.0.0.2", 00:20:46.318 "adrfam": "ipv4", 00:20:46.318 "trsvcid": "4420", 00:20:46.318 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:46.318 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:46.318 "hdgst": false, 00:20:46.318 "ddgst": false 00:20:46.318 }, 00:20:46.318 "method": "bdev_nvme_attach_controller" 00:20:46.318 }' 00:20:46.577 [2024-12-06 17:57:34.182551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.577 [2024-12-06 17:57:34.213096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.479 Running I/O for 10 seconds... 
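
The --json /dev/fd/63 input shown above is produced by gen_nvmf_target_json: one heredoc stanza per subsystem id is appended to a config array, and the stanzas are comma-joined (the printf at nvmf/common.sh@586) and passed through jq before bdevperf reads them. A condensed sketch of that generation pattern, simplified from the trace (the real helper also wraps the list into the full bdev_nvme config section):

# Sketch: emit one bdev_nvme_attach_controller stanza per subsystem id.
gen_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"  # comma-joined stanzas, as seen in the trace above
}
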
00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:20:48.479 17:57:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:48.479 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:48.479 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:48.479 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:48.479 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.479 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:48.479 17:57:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:48.480 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.480 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:48.480 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:48.480 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:48.480 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:48.480 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:48.480 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3087773 00:20:48.480 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3087773 ']' 00:20:48.480 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3087773 00:20:48.480 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:48.480 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.480 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3087773 00:20:48.739 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:48.739 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:48.739 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3087773' 00:20:48.739 killing process with pid 3087773 00:20:48.739 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3087773 00:20:48.739 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3087773
00:20:48.739 Received shutdown signal, test time was about 0.608887 seconds
00:20:48.739
00:20:48.739 Latency(us)
00:20:48.739 [2024-12-06T16:57:36.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:48.739 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.739 Verification LBA range: start 0x0 length 0x400
00:20:48.739 Nvme1n1 : 0.57 337.75 21.11 0.00 0.00 186954.24 15837.87 187869.87
00:20:48.739 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.739 Verification LBA range: start 0x0 length 0x400
00:20:48.739 Nvme2n1 : 0.58 331.87 20.74 0.00 0.00 185682.49 14527.15 176510.29
00:20:48.740 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.740 Verification LBA range: start 0x0 length 0x400
00:20:48.740 Nvme3n1 : 0.61 322.22 20.14 0.00 0.00 174452.26 11414.19 160781.65
00:20:48.740 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.740 Verification LBA range: start 0x0 length 0x400
00:20:48.740 Nvme4n1 : 0.55 351.01 21.94 0.00 0.00 165894.69 2430.29 169519.79
00:20:48.740 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.740 Verification LBA range: start 0x0 length 0x400
00:20:48.740 Nvme5n1 : 0.56 344.97 21.56 0.00 0.00 165315.41 14199.47 166898.35
00:20:48.740 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.740 Verification LBA range: start 0x0 length 0x400
00:20:48.740 Nvme6n1 : 0.56 352.09 22.01 0.00 0.00 155195.05 2307.41 149422.08
00:20:48.740 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.740 Verification LBA range: start 0x0 length 0x400
00:20:48.740 Nvme7n1 : 0.57 335.60 20.97 0.00 0.00 161871.08 16493.23 172141.23
00:20:48.740 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.740 Verification LBA range: start 0x0 length 0x400
00:20:48.740 Nvme8n1 : 0.56 342.16 21.39 0.00 0.00 153872.78 18677.76 181753.17
00:20:48.740 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.740 Verification LBA range: start 0x0 length 0x400
00:20:48.740 Nvme9n1 : 0.58 333.00 20.81 0.00 0.00 154578.49 14745.60 192238.93
00:20:48.740 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:48.740 Verification LBA range: start 0x0 length 0x400
00:20:48.740 Nvme10n1 : 0.58 330.75 20.67 0.00 0.00 151603.48 14745.60 180879.36
00:20:48.740 [2024-12-06T16:57:36.567Z] ===================================================================================================================
00:20:48.740 [2024-12-06T16:57:36.567Z] Total : 3381.43 211.34 0.00 0.00 165534.03 2307.41 192238.93
00:20:48.740 17:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:49.719 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3087603 00:20:49.719 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:49.719 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:49.719 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:49.719 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:49.719 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:49.719 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:49.719 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:49.720 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:49.720 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:49.720 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:49.720 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:49.720 rmmod nvme_tcp 00:20:49.978 rmmod nvme_fabrics 00:20:49.978 rmmod nvme_keyring 00:20:49.978 17:57:37
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:49.978 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:49.978 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:49.978 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3087603 ']' 00:20:49.978 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3087603 00:20:49.978 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3087603 ']' 00:20:49.978 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3087603 00:20:49.978 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:49.978 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.978 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3087603 00:20:49.978 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:49.978 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:49.978 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3087603' 00:20:49.978 killing process with pid 3087603 00:20:49.978 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3087603 00:20:49.978 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3087603 00:20:50.237 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:50.237 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:50.237 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:50.237 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:50.237 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:50.237 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:50.237 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:50.237 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:50.237 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:50.237 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.237 17:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.237 17:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.140 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:52.140 00:20:52.140 real 0m6.798s 00:20:52.140 user 0m19.695s 00:20:52.140 sys 0m0.970s 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.141 ************************************ 00:20:52.141 END TEST nvmf_shutdown_tc2 00:20:52.141 ************************************ 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:52.141 ************************************ 00:20:52.141 START TEST nvmf_shutdown_tc3 00:20:52.141 ************************************ 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:52.141 17:57:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.141 17:57:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:52.141 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:52.141 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:52.141 17:57:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:52.141 Found net devices under 0000:31:00.0: cvl_0_0 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:52.141 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.142 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:52.142 Found net devices under 0000:31:00.1: cvl_0_1 00:20:52.142 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.142 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:52.142 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:52.142 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:52.142 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:52.142 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:52.142 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:52.142 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.142 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.142 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.142 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:52.142 17:57:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.142 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.142 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:52.142 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:52.142 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.142 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.142 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:52.400 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:52.400 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.400 17:57:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.400 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.400 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:52.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:52.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:20:52.401 00:20:52.401 --- 10.0.0.2 ping statistics --- 00:20:52.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.401 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:52.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:20:52.401 00:20:52.401 --- 10.0.0.1 ping statistics --- 00:20:52.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.401 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3089179 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3089179 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3089179 ']' 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
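
As in tc2, the tc3 prologue rebuilds the same single-host loopback topology: the target-side e810 port is moved into a private network namespace so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) exchange traffic over a real NIC path within one machine. Condensed restatement of the commands replayed above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# The ipts wrapper tags its firewall rule with an SPDK_NVMF comment so that
# cleanup (nvmf/common.sh@791) can strip exactly these rules again with
# iptables-save | grep -v SPDK_NVMF | iptables-restore:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
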
00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:52.401 17:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:52.659 [2024-12-06 17:57:40.254218] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:20:52.659 [2024-12-06 17:57:40.254271] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.659 [2024-12-06 17:57:40.327071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:52.659 [2024-12-06 17:57:40.357116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.659 [2024-12-06 17:57:40.357144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.659 [2024-12-06 17:57:40.357150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.659 [2024-12-06 17:57:40.357155] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.659 [2024-12-06 17:57:40.357159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:52.660 [2024-12-06 17:57:40.358425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.660 [2024-12-06 17:57:40.358578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:52.660 [2024-12-06 17:57:40.358731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.660 [2024-12-06 17:57:40.358733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:53.227 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.227 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:53.227 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:53.227 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:53.227 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:53.487 [2024-12-06 17:57:41.060423] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.487 17:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:53.487 
17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.487 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:53.487 Malloc1 00:20:53.487 [2024-12-06 17:57:41.145671] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.487 Malloc2 00:20:53.487 Malloc3 00:20:53.487 Malloc4 00:20:53.487 Malloc5 00:20:53.758 Malloc6 00:20:53.758 Malloc7 00:20:53.758 Malloc8 00:20:53.758 Malloc9 00:20:53.758 Malloc10 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3089501 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3089501 /var/tmp/bdevperf.sock 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3089501 ']' 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
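[editor's note] With the network up, nvmfappstart launches nvmf_tgt inside the namespace with core mask 0x1E and waits on /var/tmp/spdk.sock, then the create_subsystems step writes ten subsystem definitions into rpcs.txt (the ten cat calls above) and replays them over the RPC socket, yielding Malloc1 through Malloc10 and the listener on 10.0.0.2:4420. The per-subsystem RPC bodies are not shown in this excerpt, so the ones below are illustrative of the usual SPDK pattern (sizes are example values), not a verbatim copy of shutdown.sh:

# Hypothetical reconstruction of the rpcs.txt batch built above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock
rpcs=$(mktemp)
for i in {1..10}; do
cat <<EOF >> "$rpcs"
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# rpc_cmd batches these through one rpc.py session; a plain loop also works:
while read -r cmd; do
  $RPC -s "$SOCK" $cmd     # word-splitting of $cmd is intentional here
done < "$rpcs"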
00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.758 { 00:20:53.758 "params": { 00:20:53.758 "name": "Nvme$subsystem", 00:20:53.758 "trtype": "$TEST_TRANSPORT", 00:20:53.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.758 "adrfam": "ipv4", 00:20:53.758 "trsvcid": "$NVMF_PORT", 00:20:53.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.758 "hdgst": ${hdgst:-false}, 00:20:53.758 "ddgst": ${ddgst:-false} 00:20:53.758 }, 00:20:53.758 "method": "bdev_nvme_attach_controller" 00:20:53.758 } 00:20:53.758 EOF 00:20:53.758 )") 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.758 { 00:20:53.758 "params": { 00:20:53.758 "name": "Nvme$subsystem", 00:20:53.758 "trtype": "$TEST_TRANSPORT", 00:20:53.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.758 "adrfam": "ipv4", 00:20:53.758 "trsvcid": "$NVMF_PORT", 00:20:53.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.758 "hdgst": ${hdgst:-false}, 00:20:53.758 "ddgst": ${ddgst:-false} 00:20:53.758 }, 00:20:53.758 "method": "bdev_nvme_attach_controller" 00:20:53.758 } 00:20:53.758 EOF 00:20:53.758 )") 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.758 { 00:20:53.758 "params": { 00:20:53.758 "name": "Nvme$subsystem", 00:20:53.758 "trtype": "$TEST_TRANSPORT", 00:20:53.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.758 "adrfam": "ipv4", 00:20:53.758 "trsvcid": "$NVMF_PORT", 00:20:53.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.758 "hdgst": ${hdgst:-false}, 00:20:53.758 "ddgst": ${ddgst:-false} 00:20:53.758 }, 00:20:53.758 "method": 
"bdev_nvme_attach_controller" 00:20:53.758 } 00:20:53.758 EOF 00:20:53.758 )") 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.758 { 00:20:53.758 "params": { 00:20:53.758 "name": "Nvme$subsystem", 00:20:53.758 "trtype": "$TEST_TRANSPORT", 00:20:53.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.758 "adrfam": "ipv4", 00:20:53.758 "trsvcid": "$NVMF_PORT", 00:20:53.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.758 "hdgst": ${hdgst:-false}, 00:20:53.758 "ddgst": ${ddgst:-false} 00:20:53.758 }, 00:20:53.758 "method": "bdev_nvme_attach_controller" 00:20:53.758 } 00:20:53.758 EOF 00:20:53.758 )") 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.758 { 00:20:53.758 "params": { 00:20:53.758 "name": "Nvme$subsystem", 00:20:53.758 "trtype": "$TEST_TRANSPORT", 00:20:53.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.758 "adrfam": "ipv4", 00:20:53.758 "trsvcid": "$NVMF_PORT", 00:20:53.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.758 "hdgst": ${hdgst:-false}, 00:20:53.758 "ddgst": ${ddgst:-false} 00:20:53.758 }, 00:20:53.758 "method": "bdev_nvme_attach_controller" 00:20:53.758 } 00:20:53.758 EOF 00:20:53.758 )") 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.758 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.758 { 00:20:53.758 "params": { 00:20:53.758 "name": "Nvme$subsystem", 00:20:53.759 "trtype": "$TEST_TRANSPORT", 00:20:53.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.759 "adrfam": "ipv4", 00:20:53.759 "trsvcid": "$NVMF_PORT", 00:20:53.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.759 "hdgst": ${hdgst:-false}, 00:20:53.759 "ddgst": ${ddgst:-false} 00:20:53.759 }, 00:20:53.759 "method": "bdev_nvme_attach_controller" 00:20:53.759 } 00:20:53.759 EOF 00:20:53.759 )") 00:20:53.759 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.759 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.759 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.759 { 00:20:53.759 "params": { 00:20:53.759 "name": "Nvme$subsystem", 00:20:53.759 "trtype": "$TEST_TRANSPORT", 00:20:53.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.759 "adrfam": "ipv4", 00:20:53.759 "trsvcid": "$NVMF_PORT", 00:20:53.759 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.759 "hdgst": ${hdgst:-false}, 00:20:53.759 "ddgst": ${ddgst:-false} 00:20:53.759 }, 00:20:53.759 "method": "bdev_nvme_attach_controller" 00:20:53.759 } 00:20:53.759 EOF 00:20:53.759 )") 00:20:53.759 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.759 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.759 [2024-12-06 17:57:41.557756] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:20:53.759 [2024-12-06 17:57:41.557809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089501 ] 00:20:53.759 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.759 { 00:20:53.759 "params": { 00:20:53.759 "name": "Nvme$subsystem", 00:20:53.759 "trtype": "$TEST_TRANSPORT", 00:20:53.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.759 "adrfam": "ipv4", 00:20:53.759 "trsvcid": "$NVMF_PORT", 00:20:53.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.759 "hdgst": ${hdgst:-false}, 00:20:53.759 "ddgst": ${ddgst:-false} 00:20:53.759 }, 00:20:53.759 "method": "bdev_nvme_attach_controller" 00:20:53.759 } 00:20:53.759 EOF 00:20:53.759 )") 00:20:53.759 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.759 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.759 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.759 { 00:20:53.759 "params": { 00:20:53.759 "name": "Nvme$subsystem", 00:20:53.759 "trtype": "$TEST_TRANSPORT", 00:20:53.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.759 "adrfam": "ipv4", 00:20:53.759 "trsvcid": "$NVMF_PORT", 00:20:53.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.759 "hdgst": ${hdgst:-false}, 00:20:53.759 "ddgst": ${ddgst:-false} 00:20:53.759 }, 00:20:53.759 "method": "bdev_nvme_attach_controller" 00:20:53.759 } 00:20:53.759 EOF 00:20:53.759 )") 00:20:53.759 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.759 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:53.759 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:53.759 { 00:20:53.759 "params": { 00:20:53.759 "name": "Nvme$subsystem", 00:20:53.759 "trtype": "$TEST_TRANSPORT", 00:20:53.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.759 "adrfam": "ipv4", 00:20:53.759 "trsvcid": "$NVMF_PORT", 00:20:53.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.759 "hdgst": ${hdgst:-false}, 00:20:53.759 "ddgst": ${ddgst:-false} 00:20:53.759 }, 00:20:53.759 "method": "bdev_nvme_attach_controller" 00:20:53.759 } 00:20:53.759 EOF 00:20:53.759 )") 00:20:53.759 17:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:53.759 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:20:53.759 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:53.759 17:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:53.759 "params": { 00:20:53.759 "name": "Nvme1", 00:20:53.759 "trtype": "tcp", 00:20:53.759 "traddr": "10.0.0.2", 00:20:53.759 "adrfam": "ipv4", 00:20:53.759 "trsvcid": "4420", 00:20:53.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:53.759 "hdgst": false, 00:20:53.759 "ddgst": false 00:20:53.759 }, 00:20:53.759 "method": "bdev_nvme_attach_controller" 00:20:53.759 },{ 00:20:53.759 "params": { 00:20:53.759 "name": "Nvme2", 00:20:53.759 "trtype": "tcp", 00:20:53.759 "traddr": "10.0.0.2", 00:20:53.759 "adrfam": "ipv4", 00:20:53.759 "trsvcid": "4420", 00:20:53.759 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:53.759 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:53.759 "hdgst": false, 00:20:53.759 "ddgst": false 00:20:53.759 }, 00:20:53.759 "method": "bdev_nvme_attach_controller" 00:20:53.759 },{ 00:20:53.759 "params": { 00:20:53.759 "name": "Nvme3", 00:20:53.759 "trtype": "tcp", 00:20:53.759 "traddr": "10.0.0.2", 00:20:53.759 "adrfam": "ipv4", 00:20:53.759 "trsvcid": "4420", 00:20:53.759 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:53.759 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:53.759 "hdgst": false, 00:20:53.759 "ddgst": false 00:20:53.759 }, 00:20:53.759 "method": "bdev_nvme_attach_controller" 00:20:53.759 },{ 00:20:53.759 "params": { 00:20:53.759 "name": "Nvme4", 00:20:53.759 "trtype": "tcp", 00:20:53.759 "traddr": "10.0.0.2", 00:20:53.759 "adrfam": "ipv4", 00:20:53.759 "trsvcid": "4420", 00:20:53.759 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:53.759 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:53.759 "hdgst": false, 00:20:53.759 "ddgst": false 00:20:53.759 }, 00:20:53.759 "method": "bdev_nvme_attach_controller" 00:20:53.759 },{ 00:20:53.759 "params": { 00:20:53.759 "name": "Nvme5", 00:20:53.759 "trtype": "tcp", 00:20:53.759 "traddr": "10.0.0.2", 00:20:53.759 "adrfam": "ipv4", 00:20:53.759 "trsvcid": "4420", 00:20:53.759 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:53.759 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:53.759 "hdgst": false, 00:20:53.759 "ddgst": false 00:20:53.759 }, 00:20:53.759 "method": "bdev_nvme_attach_controller" 00:20:53.759 },{ 00:20:53.759 "params": { 00:20:53.759 "name": "Nvme6", 00:20:53.759 "trtype": "tcp", 00:20:53.759 "traddr": "10.0.0.2", 00:20:53.759 "adrfam": "ipv4", 00:20:53.759 "trsvcid": "4420", 00:20:53.759 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:53.759 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:53.759 "hdgst": false, 00:20:53.759 "ddgst": false 00:20:53.759 }, 00:20:53.759 "method": "bdev_nvme_attach_controller" 00:20:53.759 },{ 00:20:53.759 "params": { 00:20:53.759 "name": "Nvme7", 00:20:53.759 "trtype": "tcp", 00:20:53.759 "traddr": "10.0.0.2", 00:20:53.759 "adrfam": "ipv4", 00:20:53.759 "trsvcid": "4420", 00:20:53.759 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:53.759 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:53.759 "hdgst": false, 00:20:53.759 "ddgst": false 00:20:53.759 }, 00:20:53.759 "method": "bdev_nvme_attach_controller" 00:20:53.759 },{ 00:20:53.759 "params": { 00:20:53.759 "name": "Nvme8", 00:20:53.759 "trtype": "tcp", 
00:20:53.759 "traddr": "10.0.0.2", 00:20:53.759 "adrfam": "ipv4", 00:20:53.759 "trsvcid": "4420", 00:20:53.759 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:53.759 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:53.759 "hdgst": false, 00:20:53.759 "ddgst": false 00:20:53.759 }, 00:20:53.759 "method": "bdev_nvme_attach_controller" 00:20:53.759 },{ 00:20:53.759 "params": { 00:20:53.759 "name": "Nvme9", 00:20:53.759 "trtype": "tcp", 00:20:53.759 "traddr": "10.0.0.2", 00:20:53.759 "adrfam": "ipv4", 00:20:53.759 "trsvcid": "4420", 00:20:53.759 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:53.759 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:53.759 "hdgst": false, 00:20:53.759 "ddgst": false 00:20:53.759 }, 00:20:53.759 "method": "bdev_nvme_attach_controller" 00:20:53.759 },{ 00:20:53.759 "params": { 00:20:53.759 "name": "Nvme10", 00:20:53.759 "trtype": "tcp", 00:20:53.759 "traddr": "10.0.0.2", 00:20:53.759 "adrfam": "ipv4", 00:20:53.759 "trsvcid": "4420", 00:20:53.759 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:53.759 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:53.759 "hdgst": false, 00:20:53.759 "ddgst": false 00:20:53.759 }, 00:20:53.759 "method": "bdev_nvme_attach_controller" 00:20:53.759 }' 00:20:54.019 [2024-12-06 17:57:41.623312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.019 [2024-12-06 17:57:41.654317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.395 Running I/O for 10 seconds... 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:20:55.655 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:55.914 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:55.914 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:55.914 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:55.914 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.914 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.914 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:55.914 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.914 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:55.914 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:55.914 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=199 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 199 -ge 100 ']' 00:20:56.173 17:57:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3089179 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3089179 ']' 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3089179 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3089179 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3089179' 00:20:56.173 killing process with pid 3089179 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3089179 00:20:56.173 17:57:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3089179 00:20:56.476 [2024-12-06 17:57:44.000470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 
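[editor's note] The RPC polling above is shutdown.sh's waitforio gate: it reads num_read_ops for Nvme1n1 from bdevperf's RPC socket until at least 100 reads have completed (3, then 67, then 199 in this run), proving I/O is in flight, and only then does killprocess SIGTERM the target (pid 3089179) mid-workload. The tqpair recv-state errors surrounding this point are the target tearing down its TCP qpairs in response to that kill. A condensed sketch of the gate, using the commands visible in the log:

# Wait until Nvme1n1 has served >= 100 reads, then kill the target under I/O.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmfpid=3089179              # pid recorded when nvmf_tgt was started
ret=1
for ((i = 10; i != 0; i--)); do
  reads=$("$RPC" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
          jq -r '.bdevs[0].num_read_ops')
  [ "$reads" -ge 100 ] && { ret=0; break; }
  sleep 0.25
done
[ "$ret" -eq 0 ] || exit 1   # no I/O observed: fail the test
kill "$nvmfpid"              # SIGTERM the target while bdevperf keeps running
wait "$nvmfpid"              # reap it (works because this shell started it)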
00:20:56.476 [2024-12-06 17:57:44.000588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000703] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.476 [2024-12-06 17:57:44.000768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.000773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.000777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.000782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.000787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.000791] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.000796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.000801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.000805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.000810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.000815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.000820] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.000825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.000830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.000834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ac90 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.001915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.001943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.001949] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.001954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.001959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.001964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.001969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.001974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.001979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.001984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.001988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.001993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the 
state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.001998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002022] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002063] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002081] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002111] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002146] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.477 [2024-12-06 17:57:44.002185] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.478 [2024-12-06 17:57:44.002190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.478 [2024-12-06 17:57:44.002195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.478 [2024-12-06 17:57:44.002199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.478 [2024-12-06 17:57:44.002204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.478 [2024-12-06 17:57:44.002209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set 00:20:56.478 [2024-12-06 
17:57:44.002213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2092260 is same with the state(6) to be set
(previous message repeated 7 more times for tqpair=0x2092260 through 17:57:44.002246; duplicate entries condensed)
00:20:56.478 [2024-12-06 17:57:44.003361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201b180 is same with the state(6) to be set
(previous message repeated for tqpair=0x201b180 through 17:57:44.003679; dozens of duplicate entries condensed)
00:20:56.479 [2024-12-06 17:57:44.004207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:56.479 [2024-12-06 17:57:44.004235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:1, cid:2 and cid:3)
00:20:56.479 [2024-12-06 17:57:44.004281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1171960 is same with the state(6) to be set
00:20:56.479 [2024-12-06 17:57:44.004314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
(the same four aborted ASYNC EVENT REQUEST pairs repeat for cid:0 through cid:3 on a second controller)
00:20:56.479 [2024-12-06 17:57:44.004362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce6b10 is same with the state(6) to be set
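The tcp.c:1790 and nvme_tcp.c:326 floods above come from the same guard pattern on the target and host sides of SPDK's NVMe/TCP transport: the qpair's recv-state setter notices it is being asked to re-enter the state it is already in (the value printed as "state(6)") and logs each redundant transition while the connection is torn down. A minimal sketch of that guard pattern, with placeholder names and enum values (an illustration, not SPDK's exact source):

    /* Sketch of a recv-state setter that reports redundant transitions.
     * The enum values here are placeholders; only the guard matters. */
    #include <stdio.h>

    enum pdu_recv_state { RECV_STATE_READY = 0, RECV_STATE_ERROR = 6 };

    struct tcp_qpair { enum pdu_recv_state recv_state; };

    static void qpair_set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* This branch produces the repeated log line above: during
             * disconnect the error path runs once per pending event, and
             * every call after the first finds the state already set. */
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
        /* ... per-state bookkeeping would follow here ... */
    }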
00:20:56.479 [2024-12-06 17:57:44.006217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201b650 is same with the state(6) to be set
(previous message repeated for tqpair=0x201b650 through 17:57:44.006546; dozens of duplicate entries condensed)
00:20:56.480 [2024-12-06 17:57:44.006466] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:56.480 [2024-12-06 17:57:44.006512] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:56.480 [2024-12-06 17:57:44.006541] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:56.480 [2024-12-06 17:57:44.007779] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:56.480 [2024-12-06 17:57:44.009250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201bb40 is same with the state(6) to be set
(previous message repeated for tqpair=0x201bb40 through 17:57:44.009572; dozens of duplicate entries condensed)
00:20:56.481 [2024-12-06 17:57:44.010355] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:56.481 [2024-12-06 17:57:44.012366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201c010 is same with the state(6) to be set
(previous message repeated for tqpair=0x201c010 through 17:57:44.012644; dozens of duplicate entries condensed)
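The interleaved nvme_tcp.c:1184 errors are the host-side PDU common-header check rejecting a PDU whose type byte is 0x00. In NVMe/TCP the first byte of every PDU is its type, and only certain controller-to-host types are valid on an established host connection, so an all-zero header (typical of a socket drained while the target tears the connection down) fails the check and drives the qpair into the error recv state seen above. A simplified sketch of such a check, using PDU type values from the NVMe/TCP specification (an illustration, not SPDK's exact handler; termination-request PDUs are omitted):

    #include <stdint.h>
    #include <stdio.h>

    /* Controller-to-host PDU types, per the NVMe/TCP specification. */
    #define NVME_TCP_PDU_TYPE_IC_RESP       0x01
    #define NVME_TCP_PDU_TYPE_CAPSULE_RESP  0x05
    #define NVME_TCP_PDU_TYPE_C2H_DATA      0x07
    #define NVME_TCP_PDU_TYPE_R2T           0x09

    /* Returns 0 if the common header names a PDU the host may receive. */
    static int pdu_ch_handle(uint8_t pdu_type)
    {
        switch (pdu_type) {
        case NVME_TCP_PDU_TYPE_IC_RESP:
        case NVME_TCP_PDU_TYPE_CAPSULE_RESP:
        case NVME_TCP_PDU_TYPE_C2H_DATA:
        case NVME_TCP_PDU_TYPE_R2T:
            return 0; /* proceed to parse the PDU-specific header */
        default:
            /* 0x00 is ICReq, a host-to-controller PDU (or zeroed data),
             * so the host must never see it: report and fail the qpair. */
            fprintf(stderr, "Unexpected PDU type 0x%02x\n", pdu_type);
            return -1;
        }
    }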
00:20:56.482 [2024-12-06 17:57:44.013357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201c390 is same with the state(6) to be set
(previous message repeated for tqpair=0x201c390 through 17:57:44.013666; dozens of duplicate entries condensed)
00:20:56.483 [2024-12-06 17:57:44.013683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:56.483 [2024-12-06 17:57:44.013700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the same READ / ABORTED - SQ DELETION pair repeats for cid:1 through cid:53, with lba advancing by 128 per command from 32768 to 39552; duplicate entries condensed)
00:20:56.485 [2024-12-06 17:57:44.014387] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.485 [2024-12-06 17:57:44.014393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.485 [2024-12-06 17:57:44.014400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.485 [2024-12-06 17:57:44.014406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.485 [2024-12-06 17:57:44.014412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.485 [2024-12-06 17:57:44.014417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.485 [2024-12-06 17:57:44.014424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.485 [2024-12-06 17:57:44.014429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.485 [2024-12-06 17:57:44.014436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.485 [2024-12-06 17:57:44.014441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.485 [2024-12-06 17:57:44.014447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.485 [2024-12-06 17:57:44.014453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.485 [2024-12-06 17:57:44.014459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.485 [2024-12-06 17:57:44.014464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.485 [2024-12-06 17:57:44.014471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.485 [2024-12-06 17:57:44.014476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.485 [2024-12-06 17:57:44.014482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.485 [2024-12-06 17:57:44.014487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.485 [2024-12-06 17:57:44.014494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.485 [2024-12-06 17:57:44.014499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.485 [2024-12-06 17:57:44.014505] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10feea0 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.014500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201c710 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.014518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201c710 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.014885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.485
[2024-12-06 17:57:44.014899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.485
[2024-12-06 17:57:44.014906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.485
[2024-12-06 17:57:44.014913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.485
[2024-12-06 17:57:44.014920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.485
[2024-12-06 17:57:44.014925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.485
[2024-12-06 17:57:44.014930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.485
[2024-12-06 17:57:44.014936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.485
[2024-12-06 17:57:44.014935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.014941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc12610 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.014951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.014957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.014961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.014967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.014971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.014971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.485
[2024-12-06 17:57:44.014979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.014980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.485
[2024-12-06 17:57:44.014984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.014986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.485
[2024-12-06 17:57:44.014989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.014992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.485
[2024-12-06 17:57:44.014994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.014999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.485
[2024-12-06 17:57:44.015000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.015005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.485
[2024-12-06 17:57:44.015005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.015012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.015013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.485
[2024-12-06 17:57:44.015019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.015021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.485
[2024-12-06 17:57:44.015025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.015027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbca0 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.015030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.015035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.015040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.015043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.485
[2024-12-06 17:57:44.015045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.015049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.485
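The "(00/08)" printed with every ABORTED - SQ DELETION completion above is the NVMe status pair (status code type / status code): SCT 0x00 is the generic command status set and SC 0x08 is "command aborted due to SQ deletion", which is what the host driver reports for every command still in flight when its submission queue is deleted during the controller reset this test provokes. As a minimal sketch (plain C, with constants that mirror the NVMe spec values rather than pulling in SPDK headers; the names below are local to the sketch, not SPDK's), a completion consumer could classify these as expected teardown noise rather than media or transport failures:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Values per the NVMe base specification; names are local to this sketch. */
#define NVME_SCT_GENERIC            0x0 /* status code type */
#define NVME_SC_ABORTED_SQ_DELETION 0x8 /* status code */

static bool is_sq_deletion_abort(uint8_t sct, uint8_t sc)
{
	return sct == NVME_SCT_GENERIC && sc == NVME_SC_ABORTED_SQ_DELETION;
}

int main(void)
{
	/* The log renders the pair as "(00/08)". */
	printf("benign teardown abort: %s\n",
	       is_sq_deletion_abort(0x00, 0x08) ? "yes" : "no");
	return 0;
}

Note that every such completion above also carries dnr:0, i.e. the "do not retry" bit is clear, so the host may resubmit these commands once the queue pair has been reconnected.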
[2024-12-06 17:57:44.015051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.485
[2024-12-06 17:57:44.015056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.485
[2024-12-06 17:57:44.015056] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.486
[2024-12-06 17:57:44.015064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.486
[2024-12-06 17:57:44.015078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.486
[2024-12-06 17:57:44.015083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.486
[2024-12-06 17:57:44.015088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.486
[2024-12-06 17:57:44.015094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1123ab0 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1171960 (9): Bad file descriptor 00:20:56.486
[2024-12-06 17:57:44.015122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.486
[2024-12-06 17:57:44.015142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.486
[2024-12-06 17:57:44.015148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.486
[2024-12-06 17:57:44.015162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.486
[2024-12-06 17:57:44.015167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.486
[2024-12-06 17:57:44.015172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.486
[2024-12-06 17:57:44.015177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.486
[2024-12-06 17:57:44.015183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.486
[2024-12-06 17:57:44.015190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116a100 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
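The flush errors mixed into this stretch, "Failed to flush tqpair=... (9): Bad file descriptor", print a raw errno in parentheses, and the reconnect attempts further down report "connect() failed, errno = 111". Both numbers are ordinary POSIX errno values; a tiny sketch (assuming Linux errno numbering) that decodes the two codes appearing in this log:

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* EBADF (9): the qpair's socket fd was already closed when the
	 * driver tried to flush it.
	 * ECONNREFUSED (111): nothing was listening at the target address
	 * while the listener was being torn down and recreated. */
	int codes[] = { EBADF, ECONNREFUSED };
	for (int i = 0; i < 2; i++) {
		printf("errno %d: %s\n", codes[i], strerror(codes[i]));
	}
	return 0;
}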
[2024-12-06 17:57:44.015205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce6b10 (9): Bad file descriptor 00:20:56.486
[2024-12-06 17:57:44.015216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.486
[2024-12-06 17:57:44.015231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.486
[2024-12-06 17:57:44.015245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.486
[2024-12-06 17:57:44.015251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.486
[2024-12-06 17:57:44.015256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.486
[2024-12-06 17:57:44.015261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.486
[2024-12-06 17:57:44.015267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.486
[2024-12-06 17:57:44.015279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.486
[2024-12-06 17:57:44.015284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce4960 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.486
[2024-12-06 17:57:44.015304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.486
[2024-12-06 17:57:44.015305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201cc00 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.487
[2024-12-06 17:57:44.015317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.487
[2024-12-06 17:57:44.015322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.487
[2024-12-06 17:57:44.015328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.487
[2024-12-06 17:57:44.015333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.487
[2024-12-06 17:57:44.015338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.487
[2024-12-06 17:57:44.015343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.487
[2024-12-06 17:57:44.015348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf3430 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.487
[2024-12-06 17:57:44.015370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.487
[2024-12-06 17:57:44.015377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.487
[2024-12-06 17:57:44.015382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.487
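Almost everything else in this burst, from both the target side (tcp.c:1790:nvmf_tcp_qpair_set_recv_state) and the host side (nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state), is the same no-op transition warning: the qpair's receive state machine is asked to enter the state it is already in, here the numeric state 6 (the enum behind that number is not shown in this log). A generic sketch of the guard pattern that emits this kind of message; it mirrors the shape of the check only, not SPDK's actual implementation:

#include <stdio.h>

struct tqpair {
	int recv_state;
};

static void set_recv_state(struct tqpair *q, int state)
{
	if (q->recv_state == state) {
		/* Same wording as the log lines above. */
		fprintf(stderr,
		        "The recv state of tqpair=%p is same with the state(%d) to be set\n",
		        (void *)q, state);
		return;
	}
	q->recv_state = state;
}

int main(void)
{
	struct tqpair q = { .recv_state = 6 };
	set_recv_state(&q, 6); /* no-op transition: triggers the warning */
	return 0;
}

At this volume the message is harmless but noisy; each caller keeps re-asserting the same state while the connection is being torn down, so the guard fires on every poll.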
[2024-12-06 17:57:44.015388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.487
[2024-12-06 17:57:44.015393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.487
[2024-12-06 17:57:44.015399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:56.487
[2024-12-06 17:57:44.015404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.487
[2024-12-06 17:57:44.015410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfcdc0 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015818] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015832] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487
[2024-12-06 17:57:44.015860] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487 [2024-12-06 17:57:44.015864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487 [2024-12-06 17:57:44.015869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487 [2024-12-06 17:57:44.015873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487 [2024-12-06 17:57:44.015878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487 [2024-12-06 17:57:44.015882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487 [2024-12-06 17:57:44.015887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487 [2024-12-06 17:57:44.015891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487 [2024-12-06 17:57:44.015896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487 [2024-12-06 17:57:44.015901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487 [2024-12-06 17:57:44.015906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487 [2024-12-06 17:57:44.015910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487 [2024-12-06 17:57:44.015915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487 [2024-12-06 17:57:44.015919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487 [2024-12-06 17:57:44.015924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487 [2024-12-06 17:57:44.015929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487 [2024-12-06 17:57:44.015933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487 [2024-12-06 17:57:44.015939] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.487 [2024-12-06 17:57:44.015944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.015948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.015953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.015957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the 
state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.015962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.015966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.015971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.015975] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.015980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.015984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.015989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.015994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.015998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.016003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.016007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.016012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.016016] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.016021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.016025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.016030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.016034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.016039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.016043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.016048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.016055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.016060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.016066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.016071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201d0d0 is same with the state(6) to be set 00:20:56.488 [2024-12-06 17:57:44.016356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.488 [2024-12-06 17:57:44.016376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.488 [2024-12-06 17:57:44.016389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.488 [2024-12-06 17:57:44.016401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.488 [2024-12-06 17:57:44.016413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.488 [2024-12-06 17:57:44.016425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.488 [2024-12-06 17:57:44.016437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.488 [2024-12-06 17:57:44.016450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.488 [2024-12-06 17:57:44.016461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.488 [2024-12-06 17:57:44.016473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.488 [2024-12-06 17:57:44.016485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.488 [2024-12-06 17:57:44.016496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.488 [2024-12-06 17:57:44.016511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.488 [2024-12-06 17:57:44.016523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.488 [2024-12-06 17:57:44.016535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.488 [2024-12-06 17:57:44.016547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.488 [2024-12-06 17:57:44.016559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.488 [2024-12-06 17:57:44.016571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.488 [2024-12-06 17:57:44.016583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:56.488 [2024-12-06 17:57:44.016595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.488 [2024-12-06 17:57:44.016600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.489 [2024-12-06 17:57:44.016607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.489 [2024-12-06 17:57:44.016612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.489 [2024-12-06 17:57:44.016618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.489 [2024-12-06 17:57:44.016624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.489 [2024-12-06 17:57:44.016630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.489 [2024-12-06 17:57:44.016635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.489 [2024-12-06 17:57:44.016642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.489 [2024-12-06 17:57:44.016647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.489 [2024-12-06 17:57:44.016654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.489 [2024-12-06 17:57:44.016660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.489 [2024-12-06 17:57:44.016667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.489 [2024-12-06 17:57:44.016672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.489 [2024-12-06 17:57:44.016679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.489 [2024-12-06 17:57:44.016684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.489 [2024-12-06 17:57:44.016690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.489 [2024-12-06 17:57:44.016695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.489 [2024-12-06 17:57:44.016702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.489 [2024-12-06 17:57:44.016707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:56.489 [2024-12-06 17:57:44.016713 - 17:57:44.017128] [... 35 repeated command/completion pairs: nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13-47 nsid:1 lba:34432-38784 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:20:56.490 [2024-12-06 17:57:44.017240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:20:56.490 [2024-12-06 17:57:44.017259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfbca0 (9): Bad file descriptor
00:20:56.490 [2024-12-06 17:57:44.018491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:20:56.490 [2024-12-06 17:57:44.018511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc12610 (9): Bad file descriptor
00:20:56.490 [2024-12-06 17:57:44.018979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:56.490 [2024-12-06 17:57:44.018991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfbca0 with addr=10.0.0.2, port=4420
00:20:56.490 [2024-12-06 17:57:44.018997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbca0 is same with the state(6) to be set
00:20:56.490 [2024-12-06 17:57:44.019042] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:56.490 [2024-12-06 17:57:44.019303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfbca0 (9): Bad file descriptor
00:20:56.490 [2024-12-06 17:57:44.019364] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:56.490 [2024-12-06 17:57:44.019390] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:56.490 [2024-12-06 17:57:44.019558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:56.490 [2024-12-06 17:57:44.019567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12610 with addr=10.0.0.2, port=4420
00:20:56.490 [2024-12-06 17:57:44.019573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc12610 is same with the state(6) to be set
00:20:56.490 [2024-12-06 17:57:44.019579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:20:56.490 [2024-12-06 17:57:44.019584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:20:56.490 [2024-12-06 17:57:44.019591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:20:56.490 [2024-12-06 17:57:44.019598] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:20:56.490 [2024-12-06 17:57:44.019640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc12610 (9): Bad file descriptor
00:20:56.490 [2024-12-06 17:57:44.019670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:20:56.490 [2024-12-06 17:57:44.019675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:20:56.490 [2024-12-06 17:57:44.019680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:20:56.490 [2024-12-06 17:57:44.019684] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:20:56.490 [2024-12-06 17:57:44.024909 - 17:57:44.024959] [... 4 repeated command/completion pairs: nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:20:56.490 [2024-12-06 17:57:44.024964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116fa30 is same with the state(6) to be set
00:20:56.490 [2024-12-06 17:57:44.024979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1123ab0 (9): Bad file descriptor
00:20:56.490 [2024-12-06 17:57:44.024996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116a100 (9): Bad file descriptor
00:20:56.490 [2024-12-06 17:57:44.025013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce4960 (9): Bad file descriptor
00:20:56.490 [2024-12-06 17:57:44.025025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf3430 (9): Bad file descriptor
00:20:56.490 [2024-12-06 17:57:44.025035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfcdc0 (9): Bad file descriptor
00:20:56.490 [2024-12-06 17:57:44.025118 - 17:57:44.025881] [... 64 repeated command/completion pairs on sqid:1: READ cid:4-6 lba:25088-25344, WRITE cid:0-3 lba:32768-33152, READ cid:7-63 lba:25472-32640 (step 128, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:20:56.492 [2024-12-06 17:57:44.025887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf00880 is same with the state(6) to be set
00:20:56.493 [2024-12-06 17:57:44.026796 - 17:57:44.027562] [... 64 repeated command/completion pairs: nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:20:56.495 [2024-12-06 17:57:44.027568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf3d290 is same with the state(6) to be set
00:20:56.495 [2024-12-06 17:57:44.028444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:56.495 [2024-12-06 17:57:44.028457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:20:56.495 [2024-12-06 17:57:44.028891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:56.495 [2024-12-06 17:57:44.028903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce6b10 with addr=10.0.0.2, port=4420
00:20:56.495 [2024-12-06 17:57:44.028909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce6b10 is same with the state(6) to be set
00:20:56.495 [2024-12-06 17:57:44.029230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:56.495 [2024-12-06 17:57:44.029239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1171960 with addr=10.0.0.2, port=4420
00:20:56.495 [2024-12-06 17:57:44.029247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1171960 is same with the state(6) to be set
00:20:56.495 [2024-12-06 17:57:44.029639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:20:56.495 [2024-12-06 17:57:44.029656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce6b10 (9): Bad file descriptor
00:20:56.495 [2024-12-06 17:57:44.029663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1171960 (9): Bad file descriptor
00:20:56.495 [2024-12-06 17:57:44.030045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:56.495 [2024-12-06 17:57:44.030056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfbca0 with addr=10.0.0.2, port=4420
00:20:56.495 [2024-12-06 17:57:44.030061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbca0 is same with the state(6) to be set
00:20:56.495 [2024-12-06 17:57:44.030067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:56.495 [2024-12-06 17:57:44.030073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:56.495 [2024-12-06 17:57:44.030079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:56.495 [2024-12-06 17:57:44.030084] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:20:56.495 [2024-12-06 17:57:44.030090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:20:56.495 [2024-12-06 17:57:44.030095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:20:56.495 [2024-12-06 17:57:44.030103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:20:56.495 [2024-12-06 17:57:44.030108] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:20:56.495 [2024-12-06 17:57:44.030145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:20:56.495 [2024-12-06 17:57:44.030157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfbca0 (9): Bad file descriptor 00:20:56.495 [2024-12-06 17:57:44.030345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.495 [2024-12-06 17:57:44.030353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc12610 with addr=10.0.0.2, port=4420 00:20:56.495 [2024-12-06 17:57:44.030359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc12610 is same with the state(6) to be set 00:20:56.495 [2024-12-06 17:57:44.030364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:56.495 [2024-12-06 17:57:44.030368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:56.495 [2024-12-06 17:57:44.030373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:56.495 [2024-12-06 17:57:44.030378] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:20:56.495 [2024-12-06 17:57:44.030402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc12610 (9): Bad file descriptor 00:20:56.495 [2024-12-06 17:57:44.030427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:56.495 [2024-12-06 17:57:44.030432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:56.495 [2024-12-06 17:57:44.030437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:56.495 [2024-12-06 17:57:44.030441] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
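The repeated "connect() failed, errno = 111" entries above are Linux ECONNREFUSED: while the subsystems are being reset, the host keeps retrying TCP connections to the target at 10.0.0.2:4420 and nothing is accepting on that port yet. A minimal sketch that reproduces the same errno (plain Python, not SPDK code; 127.0.0.1:4420 is an assumption standing in for the target address, and it assumes no local listener on that port):

import errno
import socket

# Connect to a TCP port with no listener; the kernel answers with RST
# and connect() fails with ECONNREFUSED -- the errno 111 printed by
# posix_sock_create in the log above. 127.0.0.1:4420 is a stand-in for
# the log's 10.0.0.2:4420 (assumption).
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(1.0)
try:
    sock.connect(("127.0.0.1", 4420))
except OSError as exc:
    print(exc.errno, errno.errorcode[exc.errno])  # expected: 111 ECONNREFUSED
finally:
    sock.close()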
00:20:56.495 [2024-12-06 17:57:44.034938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116fa30 (9): Bad file descriptor
00:20:56.495 [2024-12-06 17:57:44.035034-17:57:44.035791] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 64 pairs: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:56.497 [2024-12-06 17:57:44.035796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf018b0 is same with the state(6) to be set
00:20:56.497 [2024-12-06 17:57:44.036678-17:57:44.037497] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 64 pairs: READ sqid:1 cid:0-63 nsid:1 lba:32768-40832 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:56.499 [2024-12-06 17:57:44.037503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d0b60 is same with the state(6) to be set
00:20:56.499 [2024-12-06 17:57:44.038389-17:57:44.038990] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 51 pairs: READ sqid:1 cid:0-50 nsid:1 lba:32768-39168 (lba step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:56.501 [2024-12-06 17:57:44.038996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:56.501 [2024-12-06 17:57:44.039001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.501 [2024-12-06 17:57:44.039008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.501 [2024-12-06 17:57:44.039014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.501 [2024-12-06 17:57:44.039020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.501 [2024-12-06 17:57:44.039025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.501 [2024-12-06 17:57:44.039032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.501 [2024-12-06 17:57:44.039037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.501 [2024-12-06 17:57:44.039043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.501 [2024-12-06 17:57:44.039048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.501 [2024-12-06 17:57:44.039055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.501 [2024-12-06 17:57:44.039060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.501 [2024-12-06 17:57:44.039067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.501 [2024-12-06 17:57:44.039072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.501 [2024-12-06 17:57:44.039078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.501 [2024-12-06 17:57:44.039084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.501 [2024-12-06 17:57:44.039094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.501 [2024-12-06 17:57:44.039099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.501 [2024-12-06 17:57:44.039108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.501 [2024-12-06 17:57:44.039113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.501 [2024-12-06 17:57:44.039120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.501 [2024-12-06 
17:57:44.039124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.501 [2024-12-06 17:57:44.039131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.501 [2024-12-06 17:57:44.039136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.501 [2024-12-06 17:57:44.039143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.501 [2024-12-06 17:57:44.039148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.501 [2024-12-06 17:57:44.039154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fdbe0 is same with the state(6) to be set 00:20:56.502 [2024-12-06 17:57:44.040027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.502 [2024-12-06 17:57:44.040430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.502 [2024-12-06 17:57:44.040435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.503 [2024-12-06 17:57:44.040806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.503 [2024-12-06 17:57:44.040811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.040817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.040822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.040829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.040834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.040840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1100160 is same with the state(6) to be set 00:20:56.504 [2024-12-06 17:57:44.041728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041882] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.041993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.041998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.504 [2024-12-06 17:57:44.042005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.504 [2024-12-06 17:57:44.042010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.505 [2024-12-06 17:57:44.042350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.505 [2024-12-06 17:57:44.042355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:56.505-506 [2024-12-06 17:57:44.042361-.042483] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:53-63 nsid:1 lba:39552-40832 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [11 command/completion pairs condensed]
00:20:56.506 [2024-12-06 17:57:44.042488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1102730 is same with the state(6) to be set
00:20:56.506 [2024-12-06 17:57:44.043370-.043496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2/3/4/6/8, 1] resetting controller [5 notices condensed]
00:20:56.506 [2024-12-06 17:57:44.043855-.044957] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock / nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0xcf3430, 0xce4960, 0xcfcdc0, 0x1123ab0 with addr=10.0.0.2, port=4420; the recv state of each tqpair is same with the state(6) to be set [4 failed connect attempts condensed]
00:20:56.506 [2024-12-06 17:57:44.045875-.045898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10/1/5/7, 1] resetting controller [4 notices condensed]
00:20:56.506 [2024-12-06 17:57:44.046122-.047415] posix.c:1054 / nvme_tcp.c:2288 / nvme_tcp.c: 326: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0x116a100, 0x1171960, 0xce6b10, 0xcfbca0, 0xc12610 with addr=10.0.0.2, port=4420; the recv state of each tqpair is same with the state(6) to be set [5 failed connect attempts condensed]
00:20:56.506 [2024-12-06 17:57:44.046144-.047421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf3430, 0xce4960, 0xcfcdc0, 0x1123ab0, 0x116a100 (9): Bad file descriptor [5 flush failures condensed]
00:20:56.506-507 [2024-12-06 17:57:44.047427-.047502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init / 1826:spdk_nvme_ctrlr_reconnect_poll_async / 1110:nvme_ctrlr_fail / bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2/3/4/6, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed. [4 reset-failure cascades condensed]
00:20:56.507-509 [2024-12-06 17:57:44.047558-.048273] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-59 nsid:1 lba:24576-32128 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [60 command/completion pairs condensed]
00:20:56.509 [2024-12-06 17:57:44.048279-.048320] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:60-63 nsid:1 lba:32256-32640 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [4 command/completion pairs condensed]
00:20:56.509 [2024-12-06 17:57:44.048326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11039f0 is same with the state(6) to be set
00:20:56.509 task offset: 32768 on job bdev=Nvme5n1 fails
00:20:56.509
00:20:56.509 Latency(us)
00:20:56.509 [2024-12-06T16:57:44.336Z] Device Information (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; every job ended in about 0.86-0.90 seconds with error):
00:20:56.509 Job        runtime(s)     IOPS   MiB/s   Fail/s   TO/s     Average         min         max
00:20:56.509 Nvme1n1         0.87    224.57   14.04    73.33   0.00   213006.70     3904.85   175636.48
00:20:56.509 Nvme2n1         0.88    217.53   13.60    72.51   0.00   215389.65    19005.44   196608.00
00:20:56.509 Nvme3n1         0.88    289.48   18.09    72.37   0.00   169917.27    14527.15   170393.60
00:20:56.509 Nvme4n1         0.89    288.94   18.06    72.23   0.00   167536.47    13161.81   174762.67
00:20:56.509 Nvme5n1         0.86    296.86   18.55    74.22   0.00   160123.69     6171.31   199229.44
00:20:56.509 Nvme6n1         0.89    288.39   18.02    72.10   0.00   162507.09    14090.24   176510.29
00:20:56.509 Nvme7n1         0.86    296.23   18.51    74.06   0.00   155118.51     2717.01   176510.29
00:20:56.509 Nvme8n1         0.89    287.86   17.99    71.96   0.00   157458.43    17694.72   159034.03
00:20:56.509 Nvme9n1         0.90    214.49   13.41    71.50   0.00   194924.16    15073.28   180005.55
00:20:56.509 Nvme10n1        0.87    219.57   13.72    73.19   0.00   186429.44    13981.01   198355.63
00:20:56.509 [2024-12-06T16:57:44.336Z] ===================================================================================================================
00:20:56.509 Total :                 2623.92  163.99   727.47   0.00   176187.14     2717.01   199229.44
00:20:56.509 [2024-12-06 17:57:44.068838] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:56.509 [2024-12-06 17:57:44.068883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:20:56.509 [2024-12-06 17:57:44.068920-.068945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1171960, 0xce6b10, 0xcfbca0, 0xc12610 (9): Bad file descriptor [4 flush failures condensed]
00:20:56.509 [2024-12-06 17:57:44.068952-.068971] nvme_ctrlr.c:4206 / 1826 / 1110 / bdev_nvme.c:2285: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
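A quick cross-check of the table above: the MiB/s column is simply the IOPS column times the 64 KiB (65536-byte) IO size. For Nvme1n1, from any shell where bc is available:

    echo "scale=2; 224.57 * 65536 / 1048576" | bc   # IOPS x IO size, in MiB/s
    # -> 14.03, matching the table's 14.04 MiB/s up to rounding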
00:20:56.509 [2024-12-06 17:57:44.069489-.069515] posix.c:1054 / nvme_tcp.c:2288 / nvme_tcp.c: 326: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0x116fa30 with addr=10.0.0.2, port=4420; the recv state of tqpair=0x116fa30 is same with the state(6) to be set
00:20:56.509 [2024-12-06 17:57:44.069521-.069595] nvme_ctrlr.c:4206 / 1826 / 1110 / bdev_nvme.c:2285: *ERROR*: [nqn.2016-06.io.spdk:cnode10/1/5/7, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed. [4 reset-failure cascades condensed]
00:20:56.510 [2024-12-06 17:57:44.069880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x116fa30 (9): Bad file descriptor
00:20:56.510 [2024-12-06 17:57:44.069917-.069944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6/4/3/2/8, 1] resetting controller [5 notices condensed]
00:20:56.510 [2024-12-06 17:57:44.069971-.069986] nvme_ctrlr.c:4206 / 1826 / 1110 / bdev_nvme.c:2285: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.
00:20:56.510 [2024-12-06 17:57:44.070008-.070030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7/5/1/10, 1] resetting controller [4 notices condensed]
00:20:56.510 [2024-12-06 17:57:44.070343-.072698] posix.c:1054 / nvme_tcp.c:2288 / nvme_tcp.c: 326: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0x1123ab0, 0xcfcdc0, 0xce4960, 0xcf3430, 0x116a100, 0xc12610, 0xcfbca0, 0xce6b10, 0x1171960 with addr=10.0.0.2, port=4420; the recv state of each tqpair is same with the state(6) to be set [9 failed connect attempts condensed]
00:20:56.510 [2024-12-06 17:57:44.072707-.072772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1123ab0, 0xcfcdc0, 0xce4960, 0xcf3430, 0x116a100, 0xc12610, 0xcfbca0, 0xce6b10, 0x1171960 (9): Bad file descriptor [9 flush failures condensed]
00:20:56.510-511 [2024-12-06 17:57:44.072778-.072960] nvme_ctrlr.c:4206 / 1826 / 1110 / bdev_nvme.c:2285: *ERROR*: [nqn.2016-06.io.spdk:cnode6/4/3/2/8/7/5/1/10, 1] Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed. [9 reset-failure cascades condensed]
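The errno = 111 in the connect() failures condensed above is ECONNREFUSED on Linux: the expected initiator-side error while the target's listener on 10.0.0.2:4420 is down during shutdown, with the "Bad file descriptor" flush errors following once each failed socket is torn down. The mapping can be confirmed from a shell (assuming python3 is installed; this one-liner is illustrative, not part of the test):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused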
00:20:56.511 17:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3089501 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3089501 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3089501 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:57.570 rmmod nvme_tcp 00:20:57.570 
rmmod nvme_fabrics 00:20:57.570 rmmod nvme_keyring 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3089179 ']' 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3089179 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3089179 ']' 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3089179 00:20:57.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3089179) - No such process 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3089179 is not found' 00:20:57.570 Process with pid 3089179 is not found 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.570 17:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:00.119 00:21:00.119 real 0m7.428s 00:21:00.119 user 0m18.202s 00:21:00.119 sys 0m1.019s 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:00.119 ************************************ 00:21:00.119 END TEST nvmf_shutdown_tc3 00:21:00.119 ************************************ 00:21:00.119 17:57:47 
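The `NOT wait 3089501` trace at the top of this cleanup block is the harness asserting that the bdevperf process has already exited: NOT succeeds only when the wrapped command fails. A minimal sketch of the behavior visible in the xtrace above (reconstructed from the trace, not copied from autotest_common.sh, so the exact details are assumptions):

    NOT() {                       # succeed only if the wrapped command fails
        local es=0
        # the traced version first validates "$1" via valid_exec_arg / type -t
        "$@" || es=$?             # here: wait 3089501 returned 255
        (( es > 128 )) && es=127  # normalize high exit statuses (255 -> 127)
        case "$es" in
            0) ;;                 # command succeeded, es stays 0
            *) es=1 ;;            # any failure collapses to 1
        esac
        (( !es == 0 ))            # invert: es=1 -> return 0, assertion holds
    }

The `killprocess 3089179` trace just below it follows the same defensive pattern, probing with `kill -0` before killing, which is why the already-gone pid merely logs "Process with pid 3089179 is not found" instead of failing the cleanup.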
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:00.119 ************************************ 00:21:00.119 START TEST nvmf_shutdown_tc4 00:21:00.119 ************************************ 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:00.119 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.119 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:00.120 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.120 17:57:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:00.120 Found net devices under 0000:31:00.0: cvl_0_0 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:00.120 Found net devices under 0000:31:00.1: cvl_0_1 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:00.120 17:57:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:00.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:21:00.120 00:21:00.120 --- 10.0.0.2 ping statistics --- 00:21:00.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.120 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:00.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:21:00.120 00:21:00.120 --- 10.0.0.1 ping statistics --- 00:21:00.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.120 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.120 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3090971 00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3090971 00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3090971 ']' 00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:00.121 17:57:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:00.121 [2024-12-06 17:57:47.742993] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:21:00.121 [2024-12-06 17:57:47.743042] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.121 [2024-12-06 17:57:47.814942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:00.121 [2024-12-06 17:57:47.844652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.121 [2024-12-06 17:57:47.844680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.121 [2024-12-06 17:57:47.844686] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.121 [2024-12-06 17:57:47.844691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.121 [2024-12-06 17:57:47.844696] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.121 [2024-12-06 17:57:47.845961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.121 [2024-12-06 17:57:47.846130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:00.121 [2024-12-06 17:57:47.846262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.121 [2024-12-06 17:57:47.846263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:00.689 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.689 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:00.689 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:00.689 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:00.689 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:00.950 [2024-12-06 17:57:48.543995] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # 
rpc_cmd 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.950 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:00.950 Malloc1 00:21:00.950 [2024-12-06 17:57:48.625716] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.950 Malloc2 00:21:00.950 Malloc3 00:21:00.950 Malloc4 00:21:00.950 Malloc5 00:21:01.209 Malloc6 00:21:01.209 Malloc7 00:21:01.209 Malloc8 00:21:01.209 Malloc9 00:21:01.209 Malloc10 00:21:01.209 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.209 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:01.209 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:01.209 17:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:01.209 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3091352 00:21:01.209 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:01.209 17:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:01.468 [2024-12-06 17:57:49.044524] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
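With the TCP transport created, ten subsystems backed by Malloc1 through Malloc10, and a listener on 10.0.0.2 port 4420, shutdown_tc4 starts its load generator. The command recorded as perfpid=3091352 above is:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
    # queue depth 128, 45056-byte random writes for 20 s over NVMe/TCP,
    # attaching via the discovery service (hence the deprecation warning above)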
00:21:06.757 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:06.757 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3090971 00:21:06.757 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3090971 ']' 00:21:06.757 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3090971 00:21:06.757 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:06.757 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.757 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3090971 00:21:06.757 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:06.757 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:06.757 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3090971' 00:21:06.757 killing process with pid 3090971 00:21:06.757 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3090971 00:21:06.757 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3090971 00:21:06.757 [2024-12-06 17:57:54.061916] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d2e0 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.061960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d2e0 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.062307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d7b0 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.062347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d7b0 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.062353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d7b0 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.062359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236d7b0 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.062609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dc80 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.062633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dc80 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.062639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dc80 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.062644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dc80 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.062650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x236dc80 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.062656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dc80 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.062661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236dc80 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.062715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236ce10 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.062738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236ce10 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.062744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236ce10 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.062749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236ce10 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.062755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236ce10 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.064078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2359ce0 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.064095] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2359ce0 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.064106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2359ce0 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.064112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2359ce0 is same with the state(6) to be set 00:21:06.757 [2024-12-06 17:57:54.064117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2359ce0 is same with the state(6) to be set 00:21:06.758 [2024-12-06 17:57:54.064122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2359ce0 is same with the state(6) to be set 00:21:06.758 [2024-12-06 17:57:54.064127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2359ce0 is same with the state(6) to be set 00:21:06.758 [2024-12-06 17:57:54.064132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2359ce0 is same with the state(6) to be set 00:21:06.758 [2024-12-06 17:57:54.064137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2359ce0 is same with the state(6) to be set 00:21:06.758 [2024-12-06 17:57:54.066629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e620 is same with the state(6) to be set 00:21:06.758 [2024-12-06 17:57:54.066648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e620 is same with the state(6) to be set 00:21:06.758 [2024-12-06 17:57:54.066653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e620 is same with the state(6) to be set 00:21:06.758 [2024-12-06 17:57:54.066663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e620 is same with the state(6) to be set 00:21:06.758 [2024-12-06 17:57:54.066668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e620 is same with the state(6) to be set 00:21:06.758 Write completed with error (sct=0, sc=8) 
00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 [2024-12-06 17:57:54.067304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236eaf0 is same with starting I/O failed: -6 00:21:06.758 the state(6) to be set 00:21:06.758 [2024-12-06 17:57:54.067328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236eaf0 is same with Write completed with error (sct=0, sc=8) 00:21:06.758 the state(6) to be set 00:21:06.758 [2024-12-06 17:57:54.067336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236eaf0 is same with the state(6) to be set 00:21:06.758 [2024-12-06 17:57:54.067341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236eaf0 is same with the state(6) to be set 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 [2024-12-06 17:57:54.067345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236eaf0 is same with the state(6) to be set 00:21:06.758 [2024-12-06 17:57:54.067350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236eaf0 is same with the state(6) to be set 00:21:06.758 [2024-12-06 17:57:54.067355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236eaf0 is same with the state(6) to be set 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 starting I/O failed: -6 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 starting I/O failed: -6 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 starting I/O failed: -6 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 starting I/O failed: -6 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 starting I/O failed: -6 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 [2024-12-06 17:57:54.067595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236efc0 is same with the state(6) to be set 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 [2024-12-06 17:57:54.067610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236efc0 is same with the state(6) to be set 00:21:06.758 starting I/O failed: -6 00:21:06.758 [2024-12-06 17:57:54.067616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236efc0 is same with the state(6) to be set 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 [2024-12-06 17:57:54.067622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236efc0 is same with the state(6) to be set 00:21:06.758 [2024-12-06 17:57:54.067627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x236efc0 is same with the state(6) to be set 00:21:06.758 [2024-12-06 17:57:54.067632] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236efc0 is same with the state(6) to be set 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 starting I/O failed: -6 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 starting I/O failed: -6 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 starting I/O failed: -6 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 [2024-12-06 17:57:54.067825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.758 starting I/O failed: -6 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 starting I/O failed: -6 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 starting I/O failed: -6 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 starting I/O failed: -6 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 starting I/O failed: -6 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 starting I/O failed: -6 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 [2024-12-06 17:57:54.068135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e150 is same with starting I/O failed: -6 00:21:06.758 the state(6) to be set 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 [2024-12-06 17:57:54.068155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e150 is same with the state(6) to be set 00:21:06.758 [2024-12-06 17:57:54.068161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e150 is same with the state(6) to be set 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 [2024-12-06 17:57:54.068167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e150 is same with the state(6) to be set 00:21:06.758 starting I/O failed: -6 00:21:06.758 [2024-12-06 17:57:54.068172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e150 is same with the state(6) to be set 00:21:06.758 [2024-12-06 17:57:54.068177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e150 is same with the state(6) to be set 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 [2024-12-06 17:57:54.068182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236e150 is same with the 
state(6) to be set 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 starting I/O failed: -6 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 starting I/O failed: -6 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 starting I/O failed: -6 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 starting I/O failed: -6 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.758 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 [2024-12-06 17:57:54.068447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:06.759 starting I/O failed: -6 00:21:06.759 starting I/O failed: -6 00:21:06.759 starting I/O failed: -6 00:21:06.759 starting I/O failed: -6 00:21:06.759 starting I/O failed: -6 00:21:06.759 starting I/O failed: -6 00:21:06.759 starting I/O failed: -6 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error 
(sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 [2024-12-06 17:57:54.069507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed 
with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.759 starting I/O failed: -6 00:21:06.759 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with 
error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 [2024-12-06 17:57:54.070472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:06.760 NVMe io qpair process completion error 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 [2024-12-06 17:57:54.071279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 
00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.760 starting I/O failed: -6 00:21:06.760 Write completed with error (sct=0, sc=8) 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 starting I/O failed: -6 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 starting I/O failed: -6 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 starting I/O failed: -6 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 starting I/O failed: -6 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 starting I/O failed: -6 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 starting I/O failed: -6 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 starting I/O failed: -6 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 starting I/O failed: -6 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 starting I/O failed: -6 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 starting I/O failed: -6 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 starting I/O failed: -6 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 starting I/O failed: -6 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 starting I/O failed: -6 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 starting I/O failed: -6 00:21:06.761 [2024-12-06 17:57:54.071929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 starting I/O failed: -6 00:21:06.761 Write completed with error (sct=0, sc=8) 00:21:06.761 
00:21:06.761 [2024-12-06 17:57:54.072601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:06.761 [2024-12-06 17:57:54.072739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4730 is same with the state(6) to be set
00:21:06.761 [2024-12-06 17:57:54.072755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4730 is same with the state(6) to be set
00:21:06.761 [2024-12-06 17:57:54.072761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4730 is same with the state(6) to be set
00:21:06.762 [2024-12-06 17:57:54.072766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4730 is same with the state(6) to be set
00:21:06.762 [2024-12-06 17:57:54.072771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4730 is same with the state(6) to be set
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:06.762 [2024-12-06 17:57:54.073126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b50d0 is same with the state(6) to be set
00:21:06.762 [2024-12-06 17:57:54.073143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b50d0 is same with the state(6) to be set
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:06.762 [2024-12-06 17:57:54.073382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4260 is same with the state(6) to be set
00:21:06.762 [2024-12-06 17:57:54.073398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4260 is same with the state(6) to be set
00:21:06.762 [2024-12-06 17:57:54.073404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4260 is same with the state(6) to be set
00:21:06.762 [2024-12-06 17:57:54.073409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4260 is same with the state(6) to be set
00:21:06.762 [2024-12-06 17:57:54.073414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4260 is same with the state(6) to be set
00:21:06.762 [2024-12-06 17:57:54.073421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4260 is same with the state(6) to be set
00:21:06.762 [2024-12-06 17:57:54.073425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4260 is same with the state(6) to be set
00:21:06.762 [2024-12-06 17:57:54.073430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4260 is same with the state(6) to be set
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:06.762 [2024-12-06 17:57:54.073766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:06.762 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:06.763 [2024-12-06 17:57:54.074615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:06.763 [2024-12-06 17:57:54.075255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:06.764 [2024-12-06 17:57:54.075942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:06.765 [2024-12-06 17:57:54.077223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:06.765 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:06.765 [2024-12-06 17:57:54.078170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:06.765 [2024-12-06 17:57:54.078870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:06.766 [2024-12-06 17:57:54.079568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:06.767 [2024-12-06 17:57:54.081424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:06.767 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:06.767 [2024-12-06 17:57:54.082387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:06.767 [2024-12-06 17:57:54.083036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:06.768 [2024-12-06 17:57:54.083721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:06.769 [2024-12-06 17:57:54.085723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:06.769 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:06.769 [2024-12-06 17:57:54.086584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:06.769 [2024-12-06 17:57:54.087213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:21:06.770 Write completed with error
(sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error 
(sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.770 Write completed with error (sct=0, sc=8) 00:21:06.770 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 [2024-12-06 17:57:54.089248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.771 NVMe io qpair process completion error 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 
00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 [2024-12-06 17:57:54.090110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error 
(sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.771 starting I/O failed: -6 00:21:06.771 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 [2024-12-06 17:57:54.090688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, 
sc=8) 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 [2024-12-06 17:57:54.091414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 
starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.772 starting I/O failed: -6 00:21:06.772 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 
starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 [2024-12-06 17:57:54.092691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:06.773 NVMe io qpair process completion error 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write 
completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.773 Write completed with error (sct=0, sc=8) 00:21:06.773 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 
starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with 
error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.774 Write completed with error (sct=0, sc=8) 00:21:06.774 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error 
(sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 [2024-12-06 17:57:54.097948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such 
device or address) on qpair id 2 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.775 Write completed with error (sct=0, sc=8) 00:21:06.775 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 [2024-12-06 17:57:54.098637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or 
address) on qpair id 3 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write 
completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 [2024-12-06 17:57:54.099320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.776 Write completed with error (sct=0, sc=8) 00:21:06.776 starting I/O failed: -6 00:21:06.777 Write completed with error (sct=0, sc=8) 00:21:06.777 starting I/O failed: -6 00:21:06.777 Write completed with error (sct=0, sc=8) 00:21:06.777 starting I/O failed: -6 00:21:06.777 Write completed with error 
(sct=0, sc=8) 00:21:06.777 starting I/O failed: -6
00:21:06.777 [many further identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs elided]
00:21:06.777 [2024-12-06 17:57:54.100818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:06.777 NVMe io qpair process completion error
00:21:06.777 [further runs of "Write completed with error (sct=0, sc=8)" with interspersed "starting I/O failed: -6" elided]
00:21:06.777 [2024-12-06 17:57:54.101733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:06.777 [further "starting I/O failed: -6" / "Write completed with error (sct=0, sc=8)" output elided]
00:21:06.778 [2024-12-06 17:57:54.103080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:06.778 [further "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs elided]
00:21:06.779 [2024-12-06 17:57:54.104423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:06.779 NVMe io qpair process completion error
00:21:06.779 Initializing NVMe Controllers
00:21:06.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:06.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:06.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:06.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:06.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:06.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:06.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:06.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:06.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:06.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:06.779 [printed once per controller above:] Controller IO queue size 128, less than required.
00:21:06.779 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
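The advisory above means the initiator asked for a deeper queue than the controller's negotiated IO queue size (128), so excess requests sit queued inside the NVMe driver instead of on the wire. A minimal re-run that heeds it, assuming spdk_nvme_perf's standard flags and the address/NQN shown in this log (the depth, IO size and run time are illustrative, not taken from the harness):

# Illustrative spdk_nvme_perf invocation with a queue depth below the
# negotiated controller IO queue size (128), so no request has to be
# queued at the NVMe driver level.
# -q: outstanding IOs per qpair, -o: IO size in bytes,
# -w: workload pattern, -t: run time in seconds.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode9' \
    -q 64 -o 4096 -w write -t 10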
00:21:06.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:06.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:06.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:06.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:06.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:06.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:06.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:06.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:06.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:06.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:06.779 Initialization complete. Launching workers.
00:21:06.779 ========================================================
00:21:06.779                                                                            Latency(us)
00:21:06.779 Device Information                                                      :     IOPS    MiB/s   Average       min       max
00:21:06.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  2661.16   114.35  48108.97    395.89  95806.36
00:21:06.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  2603.64   111.87  48800.00    482.23 103468.06
00:21:06.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  2582.22   110.95  49212.98    598.63  83804.60
00:21:06.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  2567.74   110.33  49500.31    674.94  83903.84
00:21:06.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  2581.59   110.93  49248.23    442.30  83361.25
00:21:06.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2592.72   111.41  49054.46    433.38  86869.93
00:21:06.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  2539.60   109.12  50100.52    484.04  89334.32
00:21:06.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  2552.20   109.66  49863.34    596.94  90534.56
00:21:06.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  2594.40   111.48  49057.82    645.93  82131.27
00:21:06.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  2521.34   108.34  50506.98    678.29  82736.18
00:21:06.780 ========================================================
00:21:06.780 Total                                                                   : 25796.60  1108.45  49336.19    395.89 103468.06
00:21:06.780
00:21:06.780 [2024-12-06 17:57:54.108203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe089e0 is same with the state(6) to be set
00:21:06.780 [2024-12-06 17:57:54.108237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe07060 is same with the state(6) to be set
00:21:06.780 [2024-12-06 17:57:54.108262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe079f0 is same with the state(6) to be set
00:21:06.780 [2024-12-06 17:57:54.108285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe086b0 is same with the state(6) to be set
00:21:06.780 [2024-12-06 17:57:54.108314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe08050 is same with the state(6) to be set
00:21:06.780 [2024-12-06 17:57:54.108337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe09540 is same with the state(6) to be set
00:21:06.780 [2024-12-06 17:57:54.108359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe076c0 is same with the state(6) to be set
00:21:06.780 [2024-12-06 17:57:54.108381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe08380 is same with the state(6) to be set
00:21:06.780 [2024-12-06 17:57:54.108403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe09360 is same with the state(6) to be set
00:21:06.780 [2024-12-06 17:57:54.108423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe07390 is same with the state(6) to be set
00:21:06.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:06.780 17:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3091352
00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3091352
00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3091352
00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:07.762 17:57:55
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:07.762 rmmod nvme_tcp 00:21:07.762 rmmod nvme_fabrics 00:21:07.762 rmmod nvme_keyring 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3090971 ']' 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3090971 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3090971 ']' 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3090971 00:21:07.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3090971) - No such process 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3090971 is not found' 00:21:07.762 Process with pid 3090971 is not found 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.762 17:57:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.665 17:57:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:09.665
00:21:09.665 real 0m9.989s
00:21:09.665 user 0m27.282s
00:21:09.665 sys 0m3.887s
00:21:09.665 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:09.665 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:21:09.665 ************************************
00:21:09.665 END TEST nvmf_shutdown_tc4
00:21:09.665 ************************************
00:21:09.665 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:21:09.665
00:21:09.665 real 0m38.732s
00:21:09.665 user 1m36.976s
00:21:09.665 sys 0m11.274s
00:21:09.665 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:09.665 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:21:09.665 ************************************
00:21:09.665 END TEST nvmf_shutdown
00:21:09.665 ************************************
00:21:09.665 17:57:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:21:09.665 17:57:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:21:09.665 17:57:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:09.665 17:57:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:09.926 ************************************
00:21:09.926 START TEST nvmf_nsid
00:21:09.926 ************************************
00:21:09.926 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:21:09.926 * Looking for test storage...
00:21:09.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:09.926 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:09.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.927 --rc genhtml_branch_coverage=1 00:21:09.927 --rc genhtml_function_coverage=1 00:21:09.927 --rc genhtml_legend=1 00:21:09.927 --rc geninfo_all_blocks=1 00:21:09.927 --rc geninfo_unexecuted_blocks=1 00:21:09.927 00:21:09.927 ' 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:09.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.927 --rc genhtml_branch_coverage=1 00:21:09.927 --rc genhtml_function_coverage=1 00:21:09.927 --rc genhtml_legend=1 00:21:09.927 --rc geninfo_all_blocks=1 00:21:09.927 --rc geninfo_unexecuted_blocks=1 00:21:09.927 00:21:09.927 ' 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:09.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.927 --rc genhtml_branch_coverage=1 00:21:09.927 --rc genhtml_function_coverage=1 00:21:09.927 --rc genhtml_legend=1 00:21:09.927 --rc geninfo_all_blocks=1 00:21:09.927 --rc geninfo_unexecuted_blocks=1 00:21:09.927 00:21:09.927 ' 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:09.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.927 --rc genhtml_branch_coverage=1 00:21:09.927 --rc genhtml_function_coverage=1 00:21:09.927 --rc genhtml_legend=1 00:21:09.927 --rc geninfo_all_blocks=1 00:21:09.927 --rc geninfo_unexecuted_blocks=1 00:21:09.927 00:21:09.927 ' 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.927 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:09.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:09.928 17:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:15.206 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:15.206 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:15.206 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:15.206 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:15.206 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:15.206 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:15.206 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:15.206 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:15.206 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:15.206 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:15.206 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:15.206 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:15.206 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:15.206 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:15.207 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:15.207 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
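For reference, the NIC discovery above can be reproduced by hand: each PCI function found in the scan ("Found 0000:31:00.0 (0x8086 - 0x159b)") exposes its kernel net device under sysfs, which is exactly the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion traced below. An illustrative check, assuming the device IDs from this run:

# List the net device(s) backing the first E810 function found above.
# The expected answer on this rig is cvl_0_0, the interface the test
# later moves into the target network namespace.
ls /sys/bus/pci/devices/0000:31:00.0/net/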
00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:15.207 Found net devices under 0000:31:00.0: cvl_0_0 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:15.207 Found net devices under 0000:31:00.1: cvl_0_1 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:15.207 17:58:02 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:15.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:15.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:21:15.207 00:21:15.207 --- 10.0.0.2 ping statistics --- 00:21:15.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.207 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:15.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:15.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:21:15.207 00:21:15.207 --- 10.0.0.1 ping statistics --- 00:21:15.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.207 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:15.207 17:58:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:15.207 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:15.207 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:15.207 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:15.207 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:15.207 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3097042 00:21:15.207 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3097042 00:21:15.207 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3097042 ']' 00:21:15.207 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.207 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.207 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.207 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.207 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:15.207 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:15.467 [2024-12-06 17:58:03.061669] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
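The nvmfappstart/waitforlisten pair traced above boils down to launching nvmf_tgt inside the target namespace and polling its RPC socket until it answers. A minimal sketch of that pattern, assuming the namespace, socket path and flags shown in this log (the polling loop itself is an illustrative stand-in for the harness helper):

# Start the target in the namespace created earlier, then wait until the
# RPC server responds before issuing any configuration commands.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
nvmfpid=$!
while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5   # keep polling until the UNIX domain socket is live
done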
00:21:15.467 [2024-12-06 17:58:03.061718] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.467 [2024-12-06 17:58:03.146430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.467 [2024-12-06 17:58:03.182962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.467 [2024-12-06 17:58:03.182995] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.467 [2024-12-06 17:58:03.183003] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.467 [2024-12-06 17:58:03.183010] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.467 [2024-12-06 17:58:03.183016] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:15.467 [2024-12-06 17:58:03.183599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.037 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.037 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:16.037 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:16.037 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:16.037 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3097381 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=1755ec7d-cca7-4b76-9f23-b40447f08391 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=c125249d-3e2c-4f80-9fcb-bf2ff6c0b895 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=f3529067-5453-45a7-971d-46abd3f27f74 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:16.296 null0 00:21:16.296 null1 00:21:16.296 null2 00:21:16.296 [2024-12-06 17:58:03.934130] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:21:16.296 [2024-12-06 17:58:03.934201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097381 ] 00:21:16.296 [2024-12-06 17:58:03.934805] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.296 [2024-12-06 17:58:03.959041] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3097381 /var/tmp/tgt2.sock 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3097381 ']' 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:16.296 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.297 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:16.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:16.297 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.297 17:58:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:16.297 [2024-12-06 17:58:04.020063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.297 [2024-12-06 17:58:04.072347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.554 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.554 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:16.554 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:16.811 [2024-12-06 17:58:04.552878] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.811 [2024-12-06 17:58:04.569013] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:16.811 nvme0n1 nvme0n2 00:21:16.811 nvme1n1 00:21:16.811 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:16.811 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:16.811 17:58:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:18.185 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:18.185 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:18.185 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:21:18.185 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:18.185 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:18.445 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:18.445 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:18.445 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:18.445 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:18.445 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:18.445 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:18.445 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:18.445 17:58:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:19.383 17:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 1755ec7d-cca7-4b76-9f23-b40447f08391 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1755ec7dcca74b769f23b40447f08391 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1755EC7DCCA74B769F23B40447F08391 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 1755EC7DCCA74B769F23B40447F08391 == \1\7\5\5\E\C\7\D\C\C\A\7\4\B\7\6\9\F\2\3\B\4\0\4\4\7\F\0\8\3\9\1 ]] 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid c125249d-3e2c-4f80-9fcb-bf2ff6c0b895 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c125249d3e2c4f809fcbbf2ff6c0b895 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C125249D3E2C4F809FCBBF2FF6C0B895 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ C125249D3E2C4F809FCBBF2FF6C0B895 == \C\1\2\5\2\4\9\D\3\E\2\C\4\F\8\0\9\F\C\B\B\F\2\F\F\6\C\0\B\8\9\5 ]] 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:19.383 17:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid f3529067-5453-45a7-971d-46abd3f27f74 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f3529067545345a7971d46abd3f27f74 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F3529067545345A7971D46ABD3F27F74 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ F3529067545345A7971D46ABD3F27F74 == \F\3\5\2\9\0\6\7\5\4\5\3\4\5\A\7\9\7\1\D\4\6\A\B\D\3\F\2\7\F\7\4 ]] 00:21:19.383 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:19.643 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:19.643 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:19.643 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3097381 00:21:19.643 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3097381 ']' 00:21:19.643 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3097381 00:21:19.643 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:19.643 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.643 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3097381 00:21:19.643 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:19.643 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:19.643 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3097381' 00:21:19.643 killing process with pid 3097381 00:21:19.643 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3097381 00:21:19.643 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3097381 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:19.903 rmmod nvme_tcp 00:21:19.903 rmmod nvme_fabrics 00:21:19.903 rmmod nvme_keyring 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3097042 ']' 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3097042 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3097042 ']' 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3097042 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3097042 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3097042' 00:21:19.903 killing process with pid 3097042 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3097042 00:21:19.903 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3097042 00:21:20.162 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:20.162 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:20.162 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:20.162 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:20.162 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:20.162 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:20.162 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:20.162 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:20.162 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:20.162 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.162 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:20.162 17:58:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.070 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:22.070 00:21:22.070 real 0m12.333s 00:21:22.070 user 0m9.956s 
00:21:22.070 sys 0m5.007s 00:21:22.070 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.070 17:58:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:22.070 ************************************ 00:21:22.070 END TEST nvmf_nsid 00:21:22.070 ************************************ 00:21:22.070 17:58:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:22.070 00:21:22.070 real 11m34.637s 00:21:22.070 user 25m14.347s 00:21:22.070 sys 3m10.291s 00:21:22.070 17:58:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.070 17:58:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:22.070 ************************************ 00:21:22.070 END TEST nvmf_target_extra 00:21:22.070 ************************************ 00:21:22.070 17:58:09 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:22.070 17:58:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:22.070 17:58:09 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:22.070 17:58:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:22.330 ************************************ 00:21:22.330 START TEST nvmf_host 00:21:22.330 ************************************ 00:21:22.330 17:58:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:22.330 * Looking for test storage... 00:21:22.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:22.330 17:58:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:22.330 17:58:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:22.330 17:58:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:22.330 17:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:22.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.330 --rc genhtml_branch_coverage=1 00:21:22.330 --rc genhtml_function_coverage=1 00:21:22.330 --rc genhtml_legend=1 00:21:22.330 --rc geninfo_all_blocks=1 00:21:22.330 --rc geninfo_unexecuted_blocks=1 00:21:22.330 00:21:22.331 ' 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:22.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.331 --rc genhtml_branch_coverage=1 00:21:22.331 --rc genhtml_function_coverage=1 00:21:22.331 --rc genhtml_legend=1 00:21:22.331 --rc geninfo_all_blocks=1 00:21:22.331 --rc geninfo_unexecuted_blocks=1 00:21:22.331 00:21:22.331 ' 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:22.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.331 --rc genhtml_branch_coverage=1 00:21:22.331 --rc genhtml_function_coverage=1 00:21:22.331 --rc genhtml_legend=1 00:21:22.331 --rc geninfo_all_blocks=1 00:21:22.331 --rc geninfo_unexecuted_blocks=1 00:21:22.331 00:21:22.331 ' 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:22.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.331 --rc genhtml_branch_coverage=1 00:21:22.331 --rc genhtml_function_coverage=1 00:21:22.331 --rc genhtml_legend=1 00:21:22.331 --rc geninfo_all_blocks=1 00:21:22.331 --rc geninfo_unexecuted_blocks=1 00:21:22.331 00:21:22.331 ' 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:22.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.331 ************************************ 00:21:22.331 START TEST nvmf_multicontroller 00:21:22.331 ************************************ 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:22.331 * Looking for test storage... 
00:21:22.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:21:22.331 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:22.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.592 --rc genhtml_branch_coverage=1 00:21:22.592 --rc genhtml_function_coverage=1 00:21:22.592 --rc genhtml_legend=1 00:21:22.592 --rc geninfo_all_blocks=1 00:21:22.592 --rc geninfo_unexecuted_blocks=1 00:21:22.592 00:21:22.592 ' 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:22.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.592 --rc genhtml_branch_coverage=1 00:21:22.592 --rc genhtml_function_coverage=1 00:21:22.592 --rc genhtml_legend=1 00:21:22.592 --rc geninfo_all_blocks=1 00:21:22.592 --rc geninfo_unexecuted_blocks=1 00:21:22.592 00:21:22.592 ' 00:21:22.592 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:22.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.592 --rc genhtml_branch_coverage=1 00:21:22.592 --rc genhtml_function_coverage=1 00:21:22.592 --rc genhtml_legend=1 00:21:22.593 --rc geninfo_all_blocks=1 00:21:22.593 --rc geninfo_unexecuted_blocks=1 00:21:22.593 00:21:22.593 ' 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:22.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.593 --rc genhtml_branch_coverage=1 00:21:22.593 --rc genhtml_function_coverage=1 00:21:22.593 --rc genhtml_legend=1 00:21:22.593 --rc geninfo_all_blocks=1 00:21:22.593 --rc geninfo_unexecuted_blocks=1 00:21:22.593 00:21:22.593 ' 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:22.593 17:58:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:22.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:22.593 17:58:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:22.593 17:58:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:27.874 
17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:27.874 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:27.874 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.874 17:58:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:27.874 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:27.875 Found net devices under 0000:31:00.0: cvl_0_0 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:27.875 Found net devices under 0000:31:00.1: cvl_0_1 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:27.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:21:27.875 00:21:27.875 --- 10.0.0.2 ping statistics --- 00:21:27.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.875 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:27.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:21:27.875 00:21:27.875 --- 10.0.0.1 ping statistics --- 00:21:27.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.875 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3102568 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3102568 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3102568 ']' 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:27.875 17:58:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.875 [2024-12-06 17:58:15.658036] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:21:27.875 [2024-12-06 17:58:15.658109] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.134 [2024-12-06 17:58:15.739658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:28.134 [2024-12-06 17:58:15.778680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.134 [2024-12-06 17:58:15.778722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.134 [2024-12-06 17:58:15.778728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.134 [2024-12-06 17:58:15.778732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.134 [2024-12-06 17:58:15.778737] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:28.134 [2024-12-06 17:58:15.780371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.134 [2024-12-06 17:58:15.780522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.134 [2024-12-06 17:58:15.780524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.702 [2024-12-06 17:58:16.485396] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.702 Malloc0 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.702 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.960 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.960 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:28.960 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.960 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.961 [2024-12-06 17:58:16.533095] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.961 [2024-12-06 17:58:16.541014] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.961 Malloc1 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3102872 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3102872 /var/tmp/bdevperf.sock 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3102872 ']' 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:28.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
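[Annotation] At this point the target holds two subsystems (cnode1 and cnode2), each backed by a Malloc bdev (64 MiB with 512-byte blocks, per the bdev_malloc_create calls above) and listening on 10.0.0.2 ports 4420 and 4421, and bdevperf has been launched with -z so it idles until driven over /var/tmp/bdevperf.sock. The rpc_cmd invocations that follow are the harness wrapper around SPDK's RPC client; outside the harness, the first attach below could look like this sketch, assuming scripts/rpc.py from the SPDK tree checked out above:

    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1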
00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:28.961 17:58:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.901 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:29.901 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:29.901 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:29.901 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.901 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.901 NVMe0n1 00:21:29.901 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.901 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:29.901 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.901 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.901 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:29.901 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.901 1 00:21:29.901 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:29.901 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:29.901 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:29.901 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:29.901 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.902 request: 00:21:29.902 { 00:21:29.902 "name": "NVMe0", 00:21:29.902 "trtype": "tcp", 00:21:29.902 "traddr": "10.0.0.2", 00:21:29.902 "adrfam": "ipv4", 00:21:29.902 "trsvcid": "4420", 00:21:29.902 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:29.902 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:29.902 "hostaddr": "10.0.0.1", 00:21:29.902 "prchk_reftag": false, 00:21:29.902 "prchk_guard": false, 00:21:29.902 "hdgst": false, 00:21:29.902 "ddgst": false, 00:21:29.902 "allow_unrecognized_csi": false, 00:21:29.902 "method": "bdev_nvme_attach_controller", 00:21:29.902 "req_id": 1 00:21:29.902 } 00:21:29.902 Got JSON-RPC error response 00:21:29.902 response: 00:21:29.902 { 00:21:29.902 "code": -114, 00:21:29.902 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:29.902 } 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.902 request: 00:21:29.902 { 00:21:29.902 "name": "NVMe0", 00:21:29.902 "trtype": "tcp", 00:21:29.902 "traddr": "10.0.0.2", 00:21:29.902 "adrfam": "ipv4", 00:21:29.902 "trsvcid": "4420", 00:21:29.902 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:29.902 "hostaddr": "10.0.0.1", 00:21:29.902 "prchk_reftag": false, 00:21:29.902 "prchk_guard": false, 00:21:29.902 "hdgst": false, 00:21:29.902 "ddgst": false, 00:21:29.902 "allow_unrecognized_csi": false, 00:21:29.902 "method": "bdev_nvme_attach_controller", 00:21:29.902 "req_id": 1 00:21:29.902 } 00:21:29.902 Got JSON-RPC error response 00:21:29.902 response: 00:21:29.902 { 00:21:29.902 "code": -114, 00:21:29.902 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:29.902 } 00:21:29.902 17:58:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.902 request: 00:21:29.902 { 00:21:29.902 "name": "NVMe0", 00:21:29.902 "trtype": "tcp", 00:21:29.902 "traddr": "10.0.0.2", 00:21:29.902 "adrfam": "ipv4", 00:21:29.902 "trsvcid": "4420", 00:21:29.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.902 "hostaddr": "10.0.0.1", 00:21:29.902 "prchk_reftag": false, 00:21:29.902 "prchk_guard": false, 00:21:29.902 "hdgst": false, 00:21:29.902 "ddgst": false, 00:21:29.902 "multipath": "disable", 00:21:29.902 "allow_unrecognized_csi": false, 00:21:29.902 "method": "bdev_nvme_attach_controller", 00:21:29.902 "req_id": 1 00:21:29.902 } 00:21:29.902 Got JSON-RPC error response 00:21:29.902 response: 00:21:29.902 { 00:21:29.902 "code": -114, 00:21:29.902 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:29.902 } 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:29.902 17:58:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.902 request: 00:21:29.902 { 00:21:29.902 "name": "NVMe0", 00:21:29.902 "trtype": "tcp", 00:21:29.902 "traddr": "10.0.0.2", 00:21:29.902 "adrfam": "ipv4", 00:21:29.902 "trsvcid": "4420", 00:21:29.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.902 "hostaddr": "10.0.0.1", 00:21:29.902 "prchk_reftag": false, 00:21:29.902 "prchk_guard": false, 00:21:29.902 "hdgst": false, 00:21:29.902 "ddgst": false, 00:21:29.902 "multipath": "failover", 00:21:29.902 "allow_unrecognized_csi": false, 00:21:29.902 "method": "bdev_nvme_attach_controller", 00:21:29.902 "req_id": 1 00:21:29.902 } 00:21:29.902 Got JSON-RPC error response 00:21:29.902 response: 00:21:29.902 { 00:21:29.902 "code": -114, 00:21:29.902 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:29.902 } 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.902 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.163 NVMe0n1 00:21:30.163 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
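[Annotation] All three NOT cases above fail with JSON-RPC error -114 because each request collides with the controller NVMe0 already holds on 10.0.0.2:4420: a different hostnqn or subsystem NQN on the same network path, a second path while multipath is "disable", and a "failover" path identical to the primary are all refused. The attach that finally succeeds differs only in the port, which makes it a genuine second path folded into the same NVMe0n1 bdev. The contrast, restated from the trace:

    # rejected: identical traddr/trsvcid to the existing NVMe0 controller -> -114
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
    # accepted: port 4421 is a distinct path, exposed through the same NVMe0n1 bdev
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1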
00:21:30.163 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:30.163 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.163 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.163 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.163 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:30.163 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.163 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.163 00:21:30.163 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.163 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:30.163 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:30.163 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.163 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.424 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.424 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:30.424 17:58:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:31.364 { 00:21:31.364 "results": [ 00:21:31.364 { 00:21:31.364 "job": "NVMe0n1", 00:21:31.364 "core_mask": "0x1", 00:21:31.364 "workload": "write", 00:21:31.364 "status": "finished", 00:21:31.364 "queue_depth": 128, 00:21:31.364 "io_size": 4096, 00:21:31.364 "runtime": 1.006776, 00:21:31.364 "iops": 28869.381073843637, 00:21:31.364 "mibps": 112.7710198197017, 00:21:31.364 "io_failed": 0, 00:21:31.364 "io_timeout": 0, 00:21:31.364 "avg_latency_us": 4423.812235105223, 00:21:31.364 "min_latency_us": 2116.266666666667, 00:21:31.364 "max_latency_us": 16056.32 00:21:31.364 } 00:21:31.364 ], 00:21:31.364 "core_count": 1 00:21:31.364 } 00:21:31.364 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:31.364 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.364 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.364 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.364 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:31.364 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3102872 00:21:31.364 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 3102872 ']' 00:21:31.364 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3102872 00:21:31.364 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:31.364 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.364 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3102872 00:21:31.364 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:31.364 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:31.364 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3102872' 00:21:31.364 killing process with pid 3102872 00:21:31.364 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3102872 00:21:31.364 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3102872 00:21:31.624 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:31.624 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.624 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.624 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.624 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:31.624 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.624 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.624 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.624 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:31.624 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:31.624 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:31.624 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:31.624 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:31.624 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:31.624 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:31.624 [2024-12-06 17:58:16.624193] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:21:31.624 [2024-12-06 17:58:16.624252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3102872 ] 00:21:31.624 [2024-12-06 17:58:16.702573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.624 [2024-12-06 17:58:16.738661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.624 [2024-12-06 17:58:17.973849] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name a7d82d09-8271-4cd2-970f-fd8f9f39be63 already exists 00:21:31.624 [2024-12-06 17:58:17.973879] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:a7d82d09-8271-4cd2-970f-fd8f9f39be63 alias for bdev NVMe1n1 00:21:31.624 [2024-12-06 17:58:17.973888] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:31.624 Running I/O for 1 seconds... 00:21:31.624 28843.00 IOPS, 112.67 MiB/s 00:21:31.624 Latency(us) 00:21:31.624 [2024-12-06T16:58:19.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.624 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:31.624 NVMe0n1 : 1.01 28869.38 112.77 0.00 0.00 4423.81 2116.27 16056.32 00:21:31.624 [2024-12-06T16:58:19.451Z] =================================================================================================================== 00:21:31.624 [2024-12-06T16:58:19.451Z] Total : 28869.38 112.77 0.00 0.00 4423.81 2116.27 16056.32 00:21:31.624 Received shutdown signal, test time was about 1.000000 seconds 00:21:31.624 00:21:31.624 Latency(us) 00:21:31.624 [2024-12-06T16:58:19.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.624 [2024-12-06T16:58:19.451Z] =================================================================================================================== 00:21:31.624 [2024-12-06T16:58:19.451Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:31.624 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:31.625 rmmod nvme_tcp 00:21:31.625 rmmod nvme_fabrics 00:21:31.625 rmmod nvme_keyring 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 
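[Annotation] The bdevperf summary preserved in try.txt is internally consistent: 28869.38 IOPS of 4 KiB writes works out to 28869.38 * 4096 / 2^20 = 112.77 MiB/s, matching the MiB/s column, and with queue depth 128 Little's law gives an average latency of 128 / 28869.38 = 4434 us, within about 0.2% of the reported 4423.81 us (the residue presumably reflects ramp-up and drain over the 1.0068 s runtime). A throwaway check:

    # sanity-check the bdevperf numbers above (Little's law: QD = IOPS x latency)
    awk 'BEGIN { iops = 28869.381073843637; io = 4096; qd = 128
        printf "throughput : %.2f MiB/s\n", iops * io / 1048576    # ~112.77, matches
        printf "avg latency: %.2f us\n",    qd / iops * 1e6 }'     # ~4433.8 vs 4423.81

The "Bdev name ... already exists" ERROR lines in the same excerpt appear to be a benign by-product of attaching a second controller whose namespace carries the same UUID, not a failure: the run still completes and the test passes below.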
00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3102568 ']' 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3102568 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3102568 ']' 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3102568 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3102568 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3102568' 00:21:31.625 killing process with pid 3102568 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3102568 00:21:31.625 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3102568 00:21:31.885 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:31.885 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:31.885 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:31.885 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:31.885 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:31.885 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:31.885 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:31.885 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:31.885 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:31.885 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.885 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.885 17:58:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.794 17:58:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:33.794 00:21:33.794 real 0m11.511s 00:21:33.794 user 0m15.639s 00:21:33.794 sys 0m4.739s 00:21:33.794 17:58:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.794 17:58:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:33.794 ************************************ 00:21:33.794 END TEST nvmf_multicontroller 00:21:33.794 ************************************ 00:21:33.794 17:58:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:33.794 17:58:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:33.794 17:58:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:33.794 17:58:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.055 ************************************ 00:21:34.055 START TEST nvmf_aer 00:21:34.055 ************************************ 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:34.055 * Looking for test storage... 00:21:34.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:34.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.055 --rc genhtml_branch_coverage=1 00:21:34.055 --rc genhtml_function_coverage=1 00:21:34.055 --rc genhtml_legend=1 00:21:34.055 --rc geninfo_all_blocks=1 00:21:34.055 --rc geninfo_unexecuted_blocks=1 00:21:34.055 00:21:34.055 ' 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:34.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.055 --rc genhtml_branch_coverage=1 00:21:34.055 --rc genhtml_function_coverage=1 00:21:34.055 --rc genhtml_legend=1 00:21:34.055 --rc geninfo_all_blocks=1 00:21:34.055 --rc geninfo_unexecuted_blocks=1 00:21:34.055 00:21:34.055 ' 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:34.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.055 --rc genhtml_branch_coverage=1 00:21:34.055 --rc genhtml_function_coverage=1 00:21:34.055 --rc genhtml_legend=1 00:21:34.055 --rc geninfo_all_blocks=1 00:21:34.055 --rc geninfo_unexecuted_blocks=1 00:21:34.055 00:21:34.055 ' 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:34.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.055 --rc genhtml_branch_coverage=1 00:21:34.055 --rc genhtml_function_coverage=1 00:21:34.055 --rc genhtml_legend=1 00:21:34.055 --rc geninfo_all_blocks=1 00:21:34.055 --rc geninfo_unexecuted_blocks=1 00:21:34.055 00:21:34.055 ' 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.055 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:34.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:34.056 17:58:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.332 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:39.332 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:39.332 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:39.332 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:39.333 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:39.333 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:39.333 Found net devices under 0000:31:00.0: cvl_0_0 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:39.333 17:58:26 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:39.333 Found net devices under 0000:31:00.1: cvl_0_1 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:39.333 17:58:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:39.333 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:39.333 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:39.333 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:39.333 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:39.594 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:39.594 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:39.594 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:39.594 
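nvmf_tcp_init above splits the two E810 ports between a fresh network namespace for the target and the root namespace for the initiator, then opens the NVMe/TCP port in the firewall. A condensed replay of the logged commands:

# target port moves into its own namespace, initiator port stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# accept NVMe/TCP (port 4420) arriving on the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT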
17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:39.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:39.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:21:39.594 00:21:39.594 --- 10.0.0.2 ping statistics --- 00:21:39.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.594 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:21:39.594 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:39.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:39.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:21:39.594 00:21:39.594 --- 10.0.0.1 ping statistics --- 00:21:39.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.594 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:21:39.594 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.594 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:39.594 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:39.594 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.594 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:39.594 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:39.594 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.594 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:39.594 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:39.594 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:39.595 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:39.595 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:39.595 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.595 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3107886 00:21:39.595 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:39.595 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3107886 00:21:39.595 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3107886 ']' 00:21:39.595 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.595 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:39.595 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.595 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:39.595 17:58:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.595 [2024-12-06 17:58:27.318801] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:21:39.595 [2024-12-06 17:58:27.318866] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.595 [2024-12-06 17:58:27.410580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:39.855 [2024-12-06 17:58:27.465650] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.855 [2024-12-06 17:58:27.465703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.855 [2024-12-06 17:58:27.465712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.855 [2024-12-06 17:58:27.465720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.855 [2024-12-06 17:58:27.465727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:39.855 [2024-12-06 17:58:27.468091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.855 [2024-12-06 17:58:27.468255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.855 [2024-12-06 17:58:27.468464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:39.855 [2024-12-06 17:58:27.468464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:40.501 [2024-12-06 17:58:28.142761] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:40.501 Malloc0 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
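The EAL line above shows the -m 0xF mask arriving as -c 0xF, and four reactors then come up on cores 0 through 3. A small sketch of how a hex core mask decodes into reactor cores:

# each set bit in the mask is one polling reactor core
mask=0xF
for (( c = 0; c < 64; c++ )); do
    (( (mask >> c) & 1 )) && echo "reactor core $c"
done    # 0xF -> cores 0 1 2 3, matching the four notices above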
0 ]] 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:40.501 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:40.502 [2024-12-06 17:58:28.195868] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:40.502 [ 00:21:40.502 { 00:21:40.502 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:40.502 "subtype": "Discovery", 00:21:40.502 "listen_addresses": [], 00:21:40.502 "allow_any_host": true, 00:21:40.502 "hosts": [] 00:21:40.502 }, 00:21:40.502 { 00:21:40.502 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.502 "subtype": "NVMe", 00:21:40.502 "listen_addresses": [ 00:21:40.502 { 00:21:40.502 "trtype": "TCP", 00:21:40.502 "adrfam": "IPv4", 00:21:40.502 "traddr": "10.0.0.2", 00:21:40.502 "trsvcid": "4420" 00:21:40.502 } 00:21:40.502 ], 00:21:40.502 "allow_any_host": true, 00:21:40.502 "hosts": [], 00:21:40.502 "serial_number": "SPDK00000000000001", 00:21:40.502 "model_number": "SPDK bdev Controller", 00:21:40.502 "max_namespaces": 2, 00:21:40.502 "min_cntlid": 1, 00:21:40.502 "max_cntlid": 65519, 00:21:40.502 "namespaces": [ 00:21:40.502 { 00:21:40.502 "nsid": 1, 00:21:40.502 "bdev_name": "Malloc0", 00:21:40.502 "name": "Malloc0", 00:21:40.502 "nguid": "CFF821629653467F80F7FB12A95F40CF", 00:21:40.502 "uuid": "cff82162-9653-467f-80f7-fb12a95f40cf" 00:21:40.502 } 00:21:40.502 ] 00:21:40.502 } 00:21:40.502 ] 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3108092 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
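The rpc_cmd calls above assemble the target one RPC at a time; the same sequence can be driven directly with scripts/rpc.py against the default /var/tmp/spdk.sock. A sketch mirroring the logged arguments (paths assume an SPDK checkout):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192         # TCP transport, 8 KiB IO unit
./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0        # 64 MiB ram disk, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems                             # returns the JSON shown above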
-e /tmp/aer_touch_file ']' 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:40.502 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:40.760 Malloc1 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:40.760 Asynchronous Event Request test 00:21:40.760 Attaching to 10.0.0.2 00:21:40.760 Attached to 10.0.0.2 00:21:40.760 Registering asynchronous event callbacks... 00:21:40.760 Starting namespace attribute notice tests for all controllers... 00:21:40.760 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:40.760 aer_cb - Changed Namespace 00:21:40.760 Cleaning up... 
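The aer tool signals readiness by touching /tmp/aer_touch_file, and the harness polls for it with the waitforfile helper whose iterations (i climbing 0, 1, 2 with 0.1 s sleeps) are traced above. A minimal reconstruction of that loop:

# poll for a file, giving up after 200 tries (~20 s), as traced above
waitforfile() {
    local i=0
    while [ ! -e "$1" ]; do
        [ "$i" -lt 200 ] || return 1
        i=$((i + 1))
        sleep 0.1
    done
    return 0
}
waitforfile /tmp/aer_touch_file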
00:21:40.760 [ 00:21:40.760 { 00:21:40.760 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:40.760 "subtype": "Discovery", 00:21:40.760 "listen_addresses": [], 00:21:40.760 "allow_any_host": true, 00:21:40.760 "hosts": [] 00:21:40.760 }, 00:21:40.760 { 00:21:40.760 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.760 "subtype": "NVMe", 00:21:40.760 "listen_addresses": [ 00:21:40.760 { 00:21:40.760 "trtype": "TCP", 00:21:40.760 "adrfam": "IPv4", 00:21:40.760 "traddr": "10.0.0.2", 00:21:40.760 "trsvcid": "4420" 00:21:40.760 } 00:21:40.760 ], 00:21:40.760 "allow_any_host": true, 00:21:40.760 "hosts": [], 00:21:40.760 "serial_number": "SPDK00000000000001", 00:21:40.760 "model_number": "SPDK bdev Controller", 00:21:40.760 "max_namespaces": 2, 00:21:40.760 "min_cntlid": 1, 00:21:40.760 "max_cntlid": 65519, 00:21:40.760 "namespaces": [ 00:21:40.760 { 00:21:40.760 "nsid": 1, 00:21:40.760 "bdev_name": "Malloc0", 00:21:40.760 "name": "Malloc0", 00:21:40.760 "nguid": "CFF821629653467F80F7FB12A95F40CF", 00:21:40.760 "uuid": "cff82162-9653-467f-80f7-fb12a95f40cf" 00:21:40.760 }, 00:21:40.760 { 00:21:40.760 "nsid": 2, 00:21:40.760 "bdev_name": "Malloc1", 00:21:40.760 "name": "Malloc1", 00:21:40.760 "nguid": "6F66716CA9B041EABBEC3C83BA13B084", 00:21:40.760 "uuid": "6f66716c-a9b0-41ea-bbec-3c83ba13b084" 00:21:40.760 } 00:21:40.760 ] 00:21:40.760 } 00:21:40.760 ] 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3108092 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.760 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:41.018 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.018 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:41.018 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.018 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:41.018 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.018 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:41.018 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.018 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:41.019 rmmod 
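Cleanup above tears the stack down through the same RPC channel, removing both malloc bdevs before deleting the subsystem, in the order logged:

./scripts/rpc.py bdev_malloc_delete Malloc0
./scripts/rpc.py bdev_malloc_delete Malloc1
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1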
nvme_tcp 00:21:41.019 rmmod nvme_fabrics 00:21:41.019 rmmod nvme_keyring 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3107886 ']' 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3107886 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3107886 ']' 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3107886 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3107886 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3107886' 00:21:41.019 killing process with pid 3107886 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3107886 00:21:41.019 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3107886 00:21:41.278 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:41.278 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:41.278 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:41.278 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:41.278 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:41.278 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:41.278 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:41.278 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:41.278 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:41.278 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.278 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.278 17:58:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.184 17:58:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:43.184 00:21:43.184 real 0m9.278s 00:21:43.184 user 0m7.055s 00:21:43.184 sys 0m4.605s 00:21:43.184 17:58:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:43.184 17:58:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:43.184 ************************************ 00:21:43.184 END TEST nvmf_aer 00:21:43.184 ************************************ 00:21:43.184 17:58:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:43.184 17:58:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:43.184 17:58:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:43.184 17:58:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.184 ************************************ 00:21:43.184 START TEST nvmf_async_init 00:21:43.184 ************************************ 00:21:43.184 17:58:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:43.445 * Looking for test storage... 00:21:43.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:43.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.445 --rc genhtml_branch_coverage=1 00:21:43.445 --rc genhtml_function_coverage=1 00:21:43.445 --rc genhtml_legend=1 00:21:43.445 --rc geninfo_all_blocks=1 00:21:43.445 --rc geninfo_unexecuted_blocks=1 00:21:43.445 00:21:43.445 ' 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:43.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.445 --rc genhtml_branch_coverage=1 00:21:43.445 --rc genhtml_function_coverage=1 00:21:43.445 --rc genhtml_legend=1 00:21:43.445 --rc geninfo_all_blocks=1 00:21:43.445 --rc geninfo_unexecuted_blocks=1 00:21:43.445 00:21:43.445 ' 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:43.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.445 --rc genhtml_branch_coverage=1 00:21:43.445 --rc genhtml_function_coverage=1 00:21:43.445 --rc genhtml_legend=1 00:21:43.445 --rc geninfo_all_blocks=1 00:21:43.445 --rc geninfo_unexecuted_blocks=1 00:21:43.445 00:21:43.445 ' 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:43.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.445 --rc genhtml_branch_coverage=1 00:21:43.445 --rc genhtml_function_coverage=1 00:21:43.445 --rc genhtml_legend=1 00:21:43.445 --rc geninfo_all_blocks=1 00:21:43.445 --rc geninfo_unexecuted_blocks=1 00:21:43.445 00:21:43.445 ' 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.445 17:58:31 
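The long trace above is scripts/common.sh deciding whether the installed lcov predates version 2 (lt 1.15 2): it splits each version on its separators and compares field by field. A simplified sketch of that comparison, restricted to plain dot-separated versions:

# return 0 if $1 is a strictly older dot-separated version than $2
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0    # earliest differing field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # equal is not less-than
}
version_lt 1.15 2 && echo "old lcov: branch/function coverage flags needed"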
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.445 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:43.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:43.446 17:58:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=4b1e71ec713d41a1820487a5b680232f 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:43.446 17:58:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.713 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:48.714 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:48.714 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:48.714 Found net devices under 0000:31:00.0: cvl_0_0 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:48.714 Found net devices under 0000:31:00.1: cvl_0_1 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.714 17:58:36 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:48.714 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:48.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:48.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:21:48.715 00:21:48.715 --- 10.0.0.2 ping statistics --- 00:21:48.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.715 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:48.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:21:48.715 00:21:48.715 --- 10.0.0.1 ping statistics --- 00:21:48.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.715 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:48.715 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:48.986 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:48.986 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:48.986 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:48.986 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.986 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3112598 00:21:48.986 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3112598 00:21:48.986 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:48.986 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3112598 ']' 00:21:48.986 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.986 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.986 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.986 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.986 17:58:36 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.986 [2024-12-06 17:58:36.604988] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
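nvmfappstart above launches the target with a one-core mask this time (-m 0x1, versus 0xF and four reactors in the aer run) inside the target namespace, then blocks until the RPC socket answers. A simplified sketch of that launch-and-wait, standing in for the waitforlisten helper:

# launch nvmf_tgt in the target namespace and wait for its RPC socket
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died early
    sleep 0.5
done
echo "nvmf_tgt ($nvmfpid) is up on /var/tmp/spdk.sock"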
00:21:48.986 [2024-12-06 17:58:36.605051] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.986 [2024-12-06 17:58:36.695634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.986 [2024-12-06 17:58:36.743072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.986 [2024-12-06 17:58:36.743123] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.986 [2024-12-06 17:58:36.743132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.986 [2024-12-06 17:58:36.743139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.986 [2024-12-06 17:58:36.743145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.986 [2024-12-06 17:58:36.743842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.921 [2024-12-06 17:58:37.424154] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.921 null0 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4b1e71ec713d41a1820487a5b680232f 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.921 [2024-12-06 17:58:37.464369] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.921 nvme0n1 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.921 [ 00:21:49.921 { 00:21:49.921 "name": "nvme0n1", 00:21:49.921 "aliases": [ 00:21:49.921 "4b1e71ec-713d-41a1-8204-87a5b680232f" 00:21:49.921 ], 00:21:49.921 "product_name": "NVMe disk", 00:21:49.921 "block_size": 512, 00:21:49.921 "num_blocks": 2097152, 00:21:49.921 "uuid": "4b1e71ec-713d-41a1-8204-87a5b680232f", 00:21:49.921 "numa_id": 0, 00:21:49.921 "assigned_rate_limits": { 00:21:49.921 "rw_ios_per_sec": 0, 00:21:49.921 "rw_mbytes_per_sec": 0, 00:21:49.921 "r_mbytes_per_sec": 0, 00:21:49.921 "w_mbytes_per_sec": 0 00:21:49.921 }, 00:21:49.921 "claimed": false, 00:21:49.921 "zoned": false, 00:21:49.921 "supported_io_types": { 00:21:49.921 "read": true, 00:21:49.921 "write": true, 00:21:49.921 "unmap": false, 00:21:49.921 "flush": true, 00:21:49.921 "reset": true, 00:21:49.921 "nvme_admin": true, 00:21:49.921 "nvme_io": true, 00:21:49.921 "nvme_io_md": false, 00:21:49.921 "write_zeroes": true, 00:21:49.921 "zcopy": false, 00:21:49.921 "get_zone_info": false, 00:21:49.921 "zone_management": false, 00:21:49.921 "zone_append": false, 00:21:49.921 "compare": true, 00:21:49.921 "compare_and_write": true, 00:21:49.921 "abort": true, 00:21:49.921 "seek_hole": false, 00:21:49.921 "seek_data": false, 00:21:49.921 "copy": true, 00:21:49.921 "nvme_iov_md": false 00:21:49.921 }, 00:21:49.921 
"memory_domains": [ 00:21:49.921 { 00:21:49.921 "dma_device_id": "system", 00:21:49.921 "dma_device_type": 1 00:21:49.921 } 00:21:49.921 ], 00:21:49.921 "driver_specific": { 00:21:49.921 "nvme": [ 00:21:49.921 { 00:21:49.921 "trid": { 00:21:49.921 "trtype": "TCP", 00:21:49.921 "adrfam": "IPv4", 00:21:49.921 "traddr": "10.0.0.2", 00:21:49.921 "trsvcid": "4420", 00:21:49.921 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:49.921 }, 00:21:49.921 "ctrlr_data": { 00:21:49.921 "cntlid": 1, 00:21:49.921 "vendor_id": "0x8086", 00:21:49.921 "model_number": "SPDK bdev Controller", 00:21:49.921 "serial_number": "00000000000000000000", 00:21:49.921 "firmware_revision": "25.01", 00:21:49.921 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:49.921 "oacs": { 00:21:49.921 "security": 0, 00:21:49.921 "format": 0, 00:21:49.921 "firmware": 0, 00:21:49.921 "ns_manage": 0 00:21:49.921 }, 00:21:49.921 "multi_ctrlr": true, 00:21:49.921 "ana_reporting": false 00:21:49.921 }, 00:21:49.921 "vs": { 00:21:49.921 "nvme_version": "1.3" 00:21:49.921 }, 00:21:49.921 "ns_data": { 00:21:49.921 "id": 1, 00:21:49.921 "can_share": true 00:21:49.921 } 00:21:49.921 } 00:21:49.921 ], 00:21:49.921 "mp_policy": "active_passive" 00:21:49.921 } 00:21:49.921 } 00:21:49.921 ] 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.921 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.921 [2024-12-06 17:58:37.712850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:49.921 [2024-12-06 17:58:37.712921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe45c40 (9): Bad file descriptor 00:21:50.181 [2024-12-06 17:58:37.845200] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.181 [ 00:21:50.181 { 00:21:50.181 "name": "nvme0n1", 00:21:50.181 "aliases": [ 00:21:50.181 "4b1e71ec-713d-41a1-8204-87a5b680232f" 00:21:50.181 ], 00:21:50.181 "product_name": "NVMe disk", 00:21:50.181 "block_size": 512, 00:21:50.181 "num_blocks": 2097152, 00:21:50.181 "uuid": "4b1e71ec-713d-41a1-8204-87a5b680232f", 00:21:50.181 "numa_id": 0, 00:21:50.181 "assigned_rate_limits": { 00:21:50.181 "rw_ios_per_sec": 0, 00:21:50.181 "rw_mbytes_per_sec": 0, 00:21:50.181 "r_mbytes_per_sec": 0, 00:21:50.181 "w_mbytes_per_sec": 0 00:21:50.181 }, 00:21:50.181 "claimed": false, 00:21:50.181 "zoned": false, 00:21:50.181 "supported_io_types": { 00:21:50.181 "read": true, 00:21:50.181 "write": true, 00:21:50.181 "unmap": false, 00:21:50.181 "flush": true, 00:21:50.181 "reset": true, 00:21:50.181 "nvme_admin": true, 00:21:50.181 "nvme_io": true, 00:21:50.181 "nvme_io_md": false, 00:21:50.181 "write_zeroes": true, 00:21:50.181 "zcopy": false, 00:21:50.181 "get_zone_info": false, 00:21:50.181 "zone_management": false, 00:21:50.181 "zone_append": false, 00:21:50.181 "compare": true, 00:21:50.181 "compare_and_write": true, 00:21:50.181 "abort": true, 00:21:50.181 "seek_hole": false, 00:21:50.181 "seek_data": false, 00:21:50.181 "copy": true, 00:21:50.181 "nvme_iov_md": false 00:21:50.181 }, 00:21:50.181 "memory_domains": [ 00:21:50.181 { 00:21:50.181 "dma_device_id": "system", 00:21:50.181 "dma_device_type": 1 00:21:50.181 } 00:21:50.181 ], 00:21:50.181 "driver_specific": { 00:21:50.181 "nvme": [ 00:21:50.181 { 00:21:50.181 "trid": { 00:21:50.181 "trtype": "TCP", 00:21:50.181 "adrfam": "IPv4", 00:21:50.181 "traddr": "10.0.0.2", 00:21:50.181 "trsvcid": "4420", 00:21:50.181 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:50.181 }, 00:21:50.181 "ctrlr_data": { 00:21:50.181 "cntlid": 2, 00:21:50.181 "vendor_id": "0x8086", 00:21:50.181 "model_number": "SPDK bdev Controller", 00:21:50.181 "serial_number": "00000000000000000000", 00:21:50.181 "firmware_revision": "25.01", 00:21:50.181 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:50.181 "oacs": { 00:21:50.181 "security": 0, 00:21:50.181 "format": 0, 00:21:50.181 "firmware": 0, 00:21:50.181 "ns_manage": 0 00:21:50.181 }, 00:21:50.181 "multi_ctrlr": true, 00:21:50.181 "ana_reporting": false 00:21:50.181 }, 00:21:50.181 "vs": { 00:21:50.181 "nvme_version": "1.3" 00:21:50.181 }, 00:21:50.181 "ns_data": { 00:21:50.181 "id": 1, 00:21:50.181 "can_share": true 00:21:50.181 } 00:21:50.181 } 00:21:50.181 ], 00:21:50.181 "mp_policy": "active_passive" 00:21:50.181 } 00:21:50.181 } 00:21:50.181 ] 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
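The two bdev_get_bdevs dumps bracket the reset nicely: the bdev keeps its name, UUID, and geometry, but ctrlr_data.cntlid moves from 1 to 2, confirming the reset tore down the TCP association and the reconnect is a brand-new controller on the same subsystem. A quick way to watch just that field (assuming jq is installed):

```bash
# Controller ID of the first NVMe path behind nvme0n1; bdev_get_bdevs
# returns a JSON array of bdev objects, hence the leading .[0].
rpc.py bdev_get_bdevs -b nvme0n1 \
    | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'
```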
00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.3Baf3RRLOc 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.3Baf3RRLOc 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.3Baf3RRLOc 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.181 [2024-12-06 17:58:37.901475] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:50.181 [2024-12-06 17:58:37.901588] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.181 [2024-12-06 17:58:37.917530] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:50.181 nvme0n1 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.181 17:58:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.181 [ 00:21:50.181 { 00:21:50.181 "name": "nvme0n1", 00:21:50.181 "aliases": [ 00:21:50.181 "4b1e71ec-713d-41a1-8204-87a5b680232f" 00:21:50.181 ], 00:21:50.181 "product_name": "NVMe disk", 00:21:50.181 "block_size": 512, 00:21:50.181 "num_blocks": 2097152, 00:21:50.181 "uuid": "4b1e71ec-713d-41a1-8204-87a5b680232f", 00:21:50.181 "numa_id": 0, 00:21:50.181 "assigned_rate_limits": { 00:21:50.181 "rw_ios_per_sec": 0, 00:21:50.181 "rw_mbytes_per_sec": 0, 00:21:50.181 "r_mbytes_per_sec": 0, 00:21:50.181 "w_mbytes_per_sec": 0 00:21:50.181 }, 00:21:50.181 "claimed": false, 00:21:50.181 "zoned": false, 00:21:50.181 "supported_io_types": { 00:21:50.181 "read": true, 00:21:50.181 "write": true, 00:21:50.181 "unmap": false, 00:21:50.181 "flush": true, 00:21:50.181 "reset": true, 00:21:50.181 "nvme_admin": true, 00:21:50.181 "nvme_io": true, 00:21:50.181 "nvme_io_md": false, 00:21:50.181 "write_zeroes": true, 00:21:50.181 "zcopy": false, 00:21:50.181 "get_zone_info": false, 00:21:50.182 "zone_management": false, 00:21:50.182 "zone_append": false, 00:21:50.182 "compare": true, 00:21:50.182 "compare_and_write": true, 00:21:50.182 "abort": true, 00:21:50.182 "seek_hole": false, 00:21:50.182 "seek_data": false, 00:21:50.182 "copy": true, 00:21:50.182 "nvme_iov_md": false 00:21:50.182 }, 00:21:50.182 "memory_domains": [ 00:21:50.182 { 00:21:50.182 "dma_device_id": "system", 00:21:50.182 "dma_device_type": 1 00:21:50.182 } 00:21:50.182 ], 00:21:50.182 "driver_specific": { 00:21:50.182 "nvme": [ 00:21:50.182 { 00:21:50.182 "trid": { 00:21:50.182 "trtype": "TCP", 00:21:50.182 "adrfam": "IPv4", 00:21:50.182 "traddr": "10.0.0.2", 00:21:50.182 "trsvcid": "4421", 00:21:50.182 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:50.182 }, 00:21:50.182 "ctrlr_data": { 00:21:50.182 "cntlid": 3, 00:21:50.182 "vendor_id": "0x8086", 00:21:50.182 "model_number": "SPDK bdev Controller", 00:21:50.182 "serial_number": "00000000000000000000", 00:21:50.182 "firmware_revision": "25.01", 00:21:50.182 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:50.182 "oacs": { 00:21:50.182 "security": 0, 00:21:50.182 "format": 0, 00:21:50.182 "firmware": 0, 00:21:50.182 "ns_manage": 0 00:21:50.182 }, 00:21:50.182 "multi_ctrlr": true, 00:21:50.182 "ana_reporting": false 00:21:50.182 }, 00:21:50.182 "vs": { 00:21:50.182 "nvme_version": "1.3" 00:21:50.182 }, 00:21:50.182 "ns_data": { 00:21:50.182 "id": 1, 00:21:50.182 "can_share": true 00:21:50.182 } 00:21:50.182 } 00:21:50.182 ], 00:21:50.182 "mp_policy": "active_passive" 00:21:50.182 } 00:21:50.182 } 00:21:50.182 ] 00:21:50.182 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.182 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:50.182 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.182 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:50.441 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.441 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.3Baf3RRLOc 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
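The TLS leg above is the experimental NVMe/TCP secure-channel path: persist an interchange-format PSK through the keyring, turn off allow-any-host, open a second listener on 4421 with --secure-channel, authorize the host NQN against the key, and attach with the same key (cntlid 3 in the final dump). A minimal sketch with the same key material as the trace; the mktemp path is illustrative:

```bash
# PSK/TLS flow, condensed from the trace above (rpc.py = spdk/scripts/rpc.py).
KEY=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
chmod 0600 "$KEY"
rpc.py keyring_file_add_key key0 "$KEY"
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
    nqn.2016-06.io.spdk:host1 --psk key0
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
    -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
rm -f "$KEY"
```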
00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:50.442 rmmod nvme_tcp 00:21:50.442 rmmod nvme_fabrics 00:21:50.442 rmmod nvme_keyring 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3112598 ']' 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3112598 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3112598 ']' 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3112598 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3112598 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3112598' 00:21:50.442 killing process with pid 3112598 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3112598 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3112598 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.442 17:58:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:52.978 00:21:52.978 real 0m9.319s 00:21:52.978 user 0m3.188s 00:21:52.978 sys 0m4.500s 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:52.978 ************************************ 00:21:52.978 END TEST nvmf_async_init 00:21:52.978 ************************************ 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.978 ************************************ 00:21:52.978 START TEST dma 00:21:52.978 ************************************ 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:52.978 * Looking for test storage... 00:21:52.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:52.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.978 --rc genhtml_branch_coverage=1 00:21:52.978 --rc genhtml_function_coverage=1 00:21:52.978 --rc genhtml_legend=1 00:21:52.978 --rc geninfo_all_blocks=1 00:21:52.978 --rc geninfo_unexecuted_blocks=1 00:21:52.978 00:21:52.978 ' 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:52.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.978 --rc genhtml_branch_coverage=1 00:21:52.978 --rc genhtml_function_coverage=1 00:21:52.978 --rc genhtml_legend=1 00:21:52.978 --rc geninfo_all_blocks=1 00:21:52.978 --rc geninfo_unexecuted_blocks=1 00:21:52.978 00:21:52.978 ' 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:52.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.978 --rc genhtml_branch_coverage=1 00:21:52.978 --rc genhtml_function_coverage=1 00:21:52.978 --rc genhtml_legend=1 00:21:52.978 --rc geninfo_all_blocks=1 00:21:52.978 --rc geninfo_unexecuted_blocks=1 00:21:52.978 00:21:52.978 ' 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:52.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.978 --rc genhtml_branch_coverage=1 00:21:52.978 --rc genhtml_function_coverage=1 00:21:52.978 --rc genhtml_legend=1 00:21:52.978 --rc geninfo_all_blocks=1 00:21:52.978 --rc geninfo_unexecuted_blocks=1 00:21:52.978 00:21:52.978 ' 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.978 
17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.978 17:58:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:52.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:52.979 00:21:52.979 real 0m0.157s 00:21:52.979 user 0m0.080s 00:21:52.979 sys 0m0.085s 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:52.979 ************************************ 00:21:52.979 END TEST dma 00:21:52.979 ************************************ 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.979 ************************************ 00:21:52.979 START TEST nvmf_identify 00:21:52.979 
************************************ 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:52.979 * Looking for test storage... 00:21:52.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:52.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.979 --rc genhtml_branch_coverage=1 00:21:52.979 --rc genhtml_function_coverage=1 00:21:52.979 --rc genhtml_legend=1 00:21:52.979 --rc geninfo_all_blocks=1 00:21:52.979 --rc geninfo_unexecuted_blocks=1 00:21:52.979 00:21:52.979 ' 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:52.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.979 --rc genhtml_branch_coverage=1 00:21:52.979 --rc genhtml_function_coverage=1 00:21:52.979 --rc genhtml_legend=1 00:21:52.979 --rc geninfo_all_blocks=1 00:21:52.979 --rc geninfo_unexecuted_blocks=1 00:21:52.979 00:21:52.979 ' 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:52.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.979 --rc genhtml_branch_coverage=1 00:21:52.979 --rc genhtml_function_coverage=1 00:21:52.979 --rc genhtml_legend=1 00:21:52.979 --rc geninfo_all_blocks=1 00:21:52.979 --rc geninfo_unexecuted_blocks=1 00:21:52.979 00:21:52.979 ' 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:52.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.979 --rc genhtml_branch_coverage=1 00:21:52.979 --rc genhtml_function_coverage=1 00:21:52.979 --rc genhtml_legend=1 00:21:52.979 --rc geninfo_all_blocks=1 00:21:52.979 --rc geninfo_unexecuted_blocks=1 00:21:52.979 00:21:52.979 ' 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.979 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:52.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- 
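The `[: : integer expression expected` complaint from nvmf/common.sh line 33, which shows up once per sourcing in this section, is a bash quirk rather than a test failure: the guard expands to `'[' '' -eq 1 ']'`, and `-eq` rejects an empty (non-integer) operand, so the test simply returns false with a message on stderr and the script continues. A generic sketch of the pitfall and a defensive rewrite follows; the variable name is illustrative, not the actual upstream one:

```bash
# Pitfall: an unset/empty variable reaching an arithmetic test.
unset SPDK_FLAG
[ "$SPDK_FLAG" -eq 1 ] && echo on   # stderr: [: : integer expression expected

# Defensive forms that default the flag to 0 before comparing:
[ "${SPDK_FLAG:-0}" -eq 1 ] && echo on
(( ${SPDK_FLAG:-0} == 1 )) && echo on
```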
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:52.980 17:58:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.246 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.246 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:58.246 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:58.246 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:58.246 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:58.246 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:58.246 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:58.246 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:21:58.246 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:58.246 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:21:58.246 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:21:58.246 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:21:58.246 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:21:58.246 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:21:58.246 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:21:58.246 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.246 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.246 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.246 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:58.247 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:58.247 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:58.247 Found net devices under 0000:31:00.0: cvl_0_0 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:58.247 Found net devices under 0000:31:00.1: cvl_0_1 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.247 17:58:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.247 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.247 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.247 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:58.247 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:58.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:58.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:21:58.506 00:21:58.506 --- 10.0.0.2 ping statistics --- 00:21:58.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.506 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:58.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:21:58.506 00:21:58.506 --- 10.0.0.1 ping statistics --- 00:21:58.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.506 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3117344 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3117344 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3117344 ']' 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.506 17:58:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:58.506 [2024-12-06 17:58:46.233659] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
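Before the target app starts, nvmf_tcp_init wires the two detected ports into a self-contained topology: the target-side port is moved into a network namespace while the initiator-side port stays in the root namespace, so one host can talk to itself over real hardware. A condensed sketch of the commands traced above (interface names and addresses are the ones from this run):

ip netns add cvl_0_0_ns_spdk                      # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
ping -c 1 10.0.0.2                                # root namespace -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and the reverse path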
00:21:58.506 [2024-12-06 17:58:46.233722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.506 [2024-12-06 17:58:46.325408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:58.765 [2024-12-06 17:58:46.380664] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.765 [2024-12-06 17:58:46.380718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.765 [2024-12-06 17:58:46.380727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.765 [2024-12-06 17:58:46.380735] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.765 [2024-12-06 17:58:46.380741] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:58.765 [2024-12-06 17:58:46.383133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.765 [2024-12-06 17:58:46.383252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.765 [2024-12-06 17:58:46.383399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.765 [2024-12-06 17:58:46.383400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.332 [2024-12-06 17:58:47.032369] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.332 Malloc0 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.332 [2024-12-06 17:58:47.116887] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.332 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.332 [ 00:21:59.332 { 00:21:59.332 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:59.332 "subtype": "Discovery", 00:21:59.332 "listen_addresses": [ 00:21:59.332 { 00:21:59.332 "trtype": "TCP", 00:21:59.332 "adrfam": "IPv4", 00:21:59.332 "traddr": "10.0.0.2", 00:21:59.332 "trsvcid": "4420" 00:21:59.332 } 00:21:59.332 ], 00:21:59.333 "allow_any_host": true, 00:21:59.333 "hosts": [] 00:21:59.333 }, 00:21:59.333 { 00:21:59.333 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.333 "subtype": "NVMe", 00:21:59.333 "listen_addresses": [ 00:21:59.333 { 00:21:59.333 "trtype": "TCP", 00:21:59.333 "adrfam": "IPv4", 00:21:59.333 "traddr": "10.0.0.2", 00:21:59.333 "trsvcid": "4420" 00:21:59.333 } 00:21:59.333 ], 00:21:59.333 "allow_any_host": true, 00:21:59.333 "hosts": [], 00:21:59.333 "serial_number": "SPDK00000000000001", 00:21:59.333 "model_number": "SPDK bdev Controller", 00:21:59.333 "max_namespaces": 32, 00:21:59.333 "min_cntlid": 1, 00:21:59.333 "max_cntlid": 65519, 00:21:59.333 "namespaces": [ 00:21:59.333 { 00:21:59.333 "nsid": 1, 00:21:59.333 "bdev_name": "Malloc0", 00:21:59.333 "name": "Malloc0", 00:21:59.333 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:59.333 "eui64": "ABCDEF0123456789", 00:21:59.333 "uuid": "c875eec2-bb3d-4787-b45a-6c3821eac7a1" 00:21:59.333 } 00:21:59.333 ] 00:21:59.333 } 00:21:59.333 ] 00:21:59.333 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.333 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:59.333 [2024-12-06 17:58:47.152506] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:21:59.333 [2024-12-06 17:58:47.152537] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3117675 ] 00:21:59.593 [2024-12-06 17:58:47.203075] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:59.593 [2024-12-06 17:58:47.203134] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:59.593 [2024-12-06 17:58:47.203140] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:59.593 [2024-12-06 17:58:47.203156] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:59.593 [2024-12-06 17:58:47.203166] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:59.593 [2024-12-06 17:58:47.207383] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:59.593 [2024-12-06 17:58:47.207419] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1dd8550 0 00:21:59.593 [2024-12-06 17:58:47.215115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:59.593 [2024-12-06 17:58:47.215128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:59.593 [2024-12-06 17:58:47.215133] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:59.593 [2024-12-06 17:58:47.215137] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:59.593 [2024-12-06 17:58:47.215171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.593 [2024-12-06 17:58:47.215177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.593 [2024-12-06 17:58:47.215182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd8550) 00:21:59.593 [2024-12-06 17:58:47.215196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:59.593 [2024-12-06 17:58:47.215214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a100, cid 0, qid 0 00:21:59.593 [2024-12-06 17:58:47.223113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.593 [2024-12-06 17:58:47.223123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.593 [2024-12-06 17:58:47.223130] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.593 [2024-12-06 17:58:47.223136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a100) on tqpair=0x1dd8550 00:21:59.593 [2024-12-06 17:58:47.223146] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:59.593 [2024-12-06 17:58:47.223153] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:59.593 [2024-12-06 17:58:47.223158] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:59.593 [2024-12-06 17:58:47.223173] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.593 [2024-12-06 17:58:47.223178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.593 [2024-12-06 17:58:47.223181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd8550) 00:21:59.593 [2024-12-06 17:58:47.223189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.593 [2024-12-06 17:58:47.223203] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a100, cid 0, qid 0 00:21:59.593 [2024-12-06 17:58:47.223422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.593 [2024-12-06 17:58:47.223429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.593 [2024-12-06 17:58:47.223432] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.593 [2024-12-06 17:58:47.223436] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a100) on tqpair=0x1dd8550 00:21:59.593 [2024-12-06 17:58:47.223444] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:59.593 [2024-12-06 17:58:47.223452] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:59.593 [2024-12-06 17:58:47.223459] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.593 [2024-12-06 17:58:47.223463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.593 [2024-12-06 17:58:47.223466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd8550) 00:21:59.593 [2024-12-06 17:58:47.223473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.593 [2024-12-06 17:58:47.223484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a100, cid 0, qid 0 00:21:59.593 [2024-12-06 17:58:47.223676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.593 [2024-12-06 17:58:47.223682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.593 [2024-12-06 17:58:47.223686] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.593 [2024-12-06 17:58:47.223690] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a100) on tqpair=0x1dd8550 00:21:59.593 [2024-12-06 17:58:47.223695] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:59.594 [2024-12-06 17:58:47.223703] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:59.594 [2024-12-06 17:58:47.223710] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.223714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.223718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd8550) 00:21:59.594 [2024-12-06 17:58:47.223724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.594 [2024-12-06 17:58:47.223734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a100, cid 0, qid 0 
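The fabric CONNECT and property exchanges above are aimed at the target that identify.sh configured a moment earlier through rpc_cmd. Reproducing that configuration with standalone RPC calls would look roughly like this (the scripts/rpc.py entry point and the default /var/tmp/spdk.sock socket are assumptions; the method names and arguments are the ones traced above):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
  --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420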
00:21:59.594 [2024-12-06 17:58:47.223945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.594 [2024-12-06 17:58:47.223952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.594 [2024-12-06 17:58:47.223959] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.223963] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a100) on tqpair=0x1dd8550 00:21:59.594 [2024-12-06 17:58:47.223969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:59.594 [2024-12-06 17:58:47.223978] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.223982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.223986] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd8550) 00:21:59.594 [2024-12-06 17:58:47.223992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.594 [2024-12-06 17:58:47.224002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a100, cid 0, qid 0 00:21:59.594 [2024-12-06 17:58:47.224162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.594 [2024-12-06 17:58:47.224169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.594 [2024-12-06 17:58:47.224172] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.224176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a100) on tqpair=0x1dd8550 00:21:59.594 [2024-12-06 17:58:47.224181] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:59.594 [2024-12-06 17:58:47.224186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:59.594 [2024-12-06 17:58:47.224194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:59.594 [2024-12-06 17:58:47.224302] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:59.594 [2024-12-06 17:58:47.224307] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:59.594 [2024-12-06 17:58:47.224316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.224320] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.224323] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd8550) 00:21:59.594 [2024-12-06 17:58:47.224330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.594 [2024-12-06 17:58:47.224341] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a100, cid 0, qid 0 00:21:59.594 [2024-12-06 17:58:47.224524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.594 [2024-12-06 17:58:47.224531] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.594 [2024-12-06 17:58:47.224534] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.224538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a100) on tqpair=0x1dd8550 00:21:59.594 [2024-12-06 17:58:47.224543] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:59.594 [2024-12-06 17:58:47.224552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.224556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.224559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd8550) 00:21:59.594 [2024-12-06 17:58:47.224566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.594 [2024-12-06 17:58:47.224576] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a100, cid 0, qid 0 00:21:59.594 [2024-12-06 17:58:47.224744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.594 [2024-12-06 17:58:47.224753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.594 [2024-12-06 17:58:47.224756] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.224760] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a100) on tqpair=0x1dd8550 00:21:59.594 [2024-12-06 17:58:47.224765] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:59.594 [2024-12-06 17:58:47.224770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:59.594 [2024-12-06 17:58:47.224778] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:59.594 [2024-12-06 17:58:47.224786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:59.594 [2024-12-06 17:58:47.224795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.224798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd8550) 00:21:59.594 [2024-12-06 17:58:47.224805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.594 [2024-12-06 17:58:47.224816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a100, cid 0, qid 0 00:21:59.594 [2024-12-06 17:58:47.225006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.594 [2024-12-06 17:58:47.225012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.594 [2024-12-06 17:58:47.225016] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.225020] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd8550): datao=0, datal=4096, cccid=0 00:21:59.594 [2024-12-06 17:58:47.225025] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1e3a100) on tqpair(0x1dd8550): expected_datao=0, payload_size=4096 00:21:59.594 [2024-12-06 17:58:47.225030] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.225053] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.225058] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.265292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.594 [2024-12-06 17:58:47.265303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.594 [2024-12-06 17:58:47.265307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.265311] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a100) on tqpair=0x1dd8550 00:21:59.594 [2024-12-06 17:58:47.265320] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:59.594 [2024-12-06 17:58:47.265325] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:59.594 [2024-12-06 17:58:47.265329] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:59.594 [2024-12-06 17:58:47.265335] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:59.594 [2024-12-06 17:58:47.265340] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:21:59.594 [2024-12-06 17:58:47.265345] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:59.594 [2024-12-06 17:58:47.265353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:59.594 [2024-12-06 17:58:47.265360] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.265367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.265371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd8550) 00:21:59.594 [2024-12-06 17:58:47.265379] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:59.594 [2024-12-06 17:58:47.265391] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a100, cid 0, qid 0 00:21:59.594 [2024-12-06 17:58:47.265626] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.594 [2024-12-06 17:58:47.265633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.594 [2024-12-06 17:58:47.265636] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.265640] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a100) on tqpair=0x1dd8550 00:21:59.594 [2024-12-06 17:58:47.265648] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.265652] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.265655] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dd8550) 00:21:59.594 
[2024-12-06 17:58:47.265661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.594 [2024-12-06 17:58:47.265668] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.265671] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.265675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1dd8550) 00:21:59.594 [2024-12-06 17:58:47.265681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.594 [2024-12-06 17:58:47.265687] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.265691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.265694] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1dd8550) 00:21:59.594 [2024-12-06 17:58:47.265700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.594 [2024-12-06 17:58:47.265706] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.594 [2024-12-06 17:58:47.265710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.265713] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd8550) 00:21:59.595 [2024-12-06 17:58:47.265719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.595 [2024-12-06 17:58:47.265724] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:59.595 [2024-12-06 17:58:47.265735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:59.595 [2024-12-06 17:58:47.265742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.265745] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd8550) 00:21:59.595 [2024-12-06 17:58:47.265752] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.595 [2024-12-06 17:58:47.265764] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a100, cid 0, qid 0 00:21:59.595 [2024-12-06 17:58:47.265769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a280, cid 1, qid 0 00:21:59.595 [2024-12-06 17:58:47.265774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a400, cid 2, qid 0 00:21:59.595 [2024-12-06 17:58:47.265779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a580, cid 3, qid 0 00:21:59.595 [2024-12-06 17:58:47.265784] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a700, cid 4, qid 0 00:21:59.595 [2024-12-06 17:58:47.265990] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.595 [2024-12-06 17:58:47.265996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.595 [2024-12-06 17:58:47.266000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:21:59.595 [2024-12-06 17:58:47.266004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a700) on tqpair=0x1dd8550 00:21:59.595 [2024-12-06 17:58:47.266009] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:59.595 [2024-12-06 17:58:47.266014] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:21:59.595 [2024-12-06 17:58:47.266025] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.266029] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd8550) 00:21:59.595 [2024-12-06 17:58:47.266036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.595 [2024-12-06 17:58:47.266046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a700, cid 4, qid 0 00:21:59.595 [2024-12-06 17:58:47.266213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.595 [2024-12-06 17:58:47.266221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.595 [2024-12-06 17:58:47.266224] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.266228] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd8550): datao=0, datal=4096, cccid=4 00:21:59.595 [2024-12-06 17:58:47.266233] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e3a700) on tqpair(0x1dd8550): expected_datao=0, payload_size=4096 00:21:59.595 [2024-12-06 17:58:47.266237] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.266244] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.266248] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.266462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.595 [2024-12-06 17:58:47.266468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.595 [2024-12-06 17:58:47.266472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.266476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a700) on tqpair=0x1dd8550 00:21:59.595 [2024-12-06 17:58:47.266487] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:59.595 [2024-12-06 17:58:47.266512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.266516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd8550) 00:21:59.595 [2024-12-06 17:58:47.266523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.595 [2024-12-06 17:58:47.266530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.266533] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.266537] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1dd8550) 00:21:59.595 [2024-12-06 17:58:47.266543] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.595 [2024-12-06 17:58:47.266557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a700, cid 4, qid 0 00:21:59.595 [2024-12-06 17:58:47.266562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a880, cid 5, qid 0 00:21:59.595 [2024-12-06 17:58:47.266811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.595 [2024-12-06 17:58:47.266818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.595 [2024-12-06 17:58:47.266823] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.266827] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd8550): datao=0, datal=1024, cccid=4 00:21:59.595 [2024-12-06 17:58:47.266832] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e3a700) on tqpair(0x1dd8550): expected_datao=0, payload_size=1024 00:21:59.595 [2024-12-06 17:58:47.266836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.266843] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.266846] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.266852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.595 [2024-12-06 17:58:47.266858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.595 [2024-12-06 17:58:47.266862] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.266865] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a880) on tqpair=0x1dd8550 00:21:59.595 [2024-12-06 17:58:47.311111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.595 [2024-12-06 17:58:47.311130] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.595 [2024-12-06 17:58:47.311134] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.311138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a700) on tqpair=0x1dd8550 00:21:59.595 [2024-12-06 17:58:47.311151] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.311155] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd8550) 00:21:59.595 [2024-12-06 17:58:47.311163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.595 [2024-12-06 17:58:47.311180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a700, cid 4, qid 0 00:21:59.595 [2024-12-06 17:58:47.311352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.595 [2024-12-06 17:58:47.311358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.595 [2024-12-06 17:58:47.311362] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.311366] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd8550): datao=0, datal=3072, cccid=4 00:21:59.595 [2024-12-06 17:58:47.311370] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e3a700) on tqpair(0x1dd8550): expected_datao=0, payload_size=3072 00:21:59.595 [2024-12-06 17:58:47.311375] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.311382] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.311385] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.311565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.595 [2024-12-06 17:58:47.311571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.595 [2024-12-06 17:58:47.311575] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.311579] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a700) on tqpair=0x1dd8550 00:21:59.595 [2024-12-06 17:58:47.311587] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.311591] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dd8550) 00:21:59.595 [2024-12-06 17:58:47.311597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.595 [2024-12-06 17:58:47.311611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a700, cid 4, qid 0 00:21:59.595 [2024-12-06 17:58:47.311861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.595 [2024-12-06 17:58:47.311868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.595 [2024-12-06 17:58:47.311871] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.311878] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dd8550): datao=0, datal=8, cccid=4 00:21:59.595 [2024-12-06 17:58:47.311883] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e3a700) on tqpair(0x1dd8550): expected_datao=0, payload_size=8 00:21:59.595 [2024-12-06 17:58:47.311887] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.311894] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.311897] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.352294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.595 [2024-12-06 17:58:47.352303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.595 [2024-12-06 17:58:47.352307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.595 [2024-12-06 17:58:47.352311] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a700) on tqpair=0x1dd8550 00:21:59.595 ===================================================== 00:21:59.595 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:59.595 ===================================================== 00:21:59.595 Controller Capabilities/Features 00:21:59.595 ================================ 00:21:59.595 Vendor ID: 0000 00:21:59.595 Subsystem Vendor ID: 0000 00:21:59.595 Serial Number: .................... 00:21:59.595 Model Number: ........................................ 
00:21:59.595 Firmware Version: 25.01 00:21:59.595 Recommended Arb Burst: 0 00:21:59.595 IEEE OUI Identifier: 00 00 00 00:21:59.595 Multi-path I/O 00:21:59.595 May have multiple subsystem ports: No 00:21:59.595 May have multiple controllers: No 00:21:59.595 Associated with SR-IOV VF: No 00:21:59.595 Max Data Transfer Size: 131072 00:21:59.595 Max Number of Namespaces: 0 00:21:59.595 Max Number of I/O Queues: 1024 00:21:59.595 NVMe Specification Version (VS): 1.3 00:21:59.596 NVMe Specification Version (Identify): 1.3 00:21:59.596 Maximum Queue Entries: 128 00:21:59.596 Contiguous Queues Required: Yes 00:21:59.596 Arbitration Mechanisms Supported 00:21:59.596 Weighted Round Robin: Not Supported 00:21:59.596 Vendor Specific: Not Supported 00:21:59.596 Reset Timeout: 15000 ms 00:21:59.596 Doorbell Stride: 4 bytes 00:21:59.596 NVM Subsystem Reset: Not Supported 00:21:59.596 Command Sets Supported 00:21:59.596 NVM Command Set: Supported 00:21:59.596 Boot Partition: Not Supported 00:21:59.596 Memory Page Size Minimum: 4096 bytes 00:21:59.596 Memory Page Size Maximum: 4096 bytes 00:21:59.596 Persistent Memory Region: Not Supported 00:21:59.596 Optional Asynchronous Events Supported 00:21:59.596 Namespace Attribute Notices: Not Supported 00:21:59.596 Firmware Activation Notices: Not Supported 00:21:59.596 ANA Change Notices: Not Supported 00:21:59.596 PLE Aggregate Log Change Notices: Not Supported 00:21:59.596 LBA Status Info Alert Notices: Not Supported 00:21:59.596 EGE Aggregate Log Change Notices: Not Supported 00:21:59.596 Normal NVM Subsystem Shutdown event: Not Supported 00:21:59.596 Zone Descriptor Change Notices: Not Supported 00:21:59.596 Discovery Log Change Notices: Supported 00:21:59.596 Controller Attributes 00:21:59.596 128-bit Host Identifier: Not Supported 00:21:59.596 Non-Operational Permissive Mode: Not Supported 00:21:59.596 NVM Sets: Not Supported 00:21:59.596 Read Recovery Levels: Not Supported 00:21:59.596 Endurance Groups: Not Supported 00:21:59.596 Predictable Latency Mode: Not Supported 00:21:59.596 Traffic Based Keep ALive: Not Supported 00:21:59.596 Namespace Granularity: Not Supported 00:21:59.596 SQ Associations: Not Supported 00:21:59.596 UUID List: Not Supported 00:21:59.596 Multi-Domain Subsystem: Not Supported 00:21:59.596 Fixed Capacity Management: Not Supported 00:21:59.596 Variable Capacity Management: Not Supported 00:21:59.596 Delete Endurance Group: Not Supported 00:21:59.596 Delete NVM Set: Not Supported 00:21:59.596 Extended LBA Formats Supported: Not Supported 00:21:59.596 Flexible Data Placement Supported: Not Supported 00:21:59.596 00:21:59.596 Controller Memory Buffer Support 00:21:59.596 ================================ 00:21:59.596 Supported: No 00:21:59.596 00:21:59.596 Persistent Memory Region Support 00:21:59.596 ================================ 00:21:59.596 Supported: No 00:21:59.596 00:21:59.596 Admin Command Set Attributes 00:21:59.596 ============================ 00:21:59.596 Security Send/Receive: Not Supported 00:21:59.596 Format NVM: Not Supported 00:21:59.596 Firmware Activate/Download: Not Supported 00:21:59.596 Namespace Management: Not Supported 00:21:59.596 Device Self-Test: Not Supported 00:21:59.596 Directives: Not Supported 00:21:59.596 NVMe-MI: Not Supported 00:21:59.596 Virtualization Management: Not Supported 00:21:59.596 Doorbell Buffer Config: Not Supported 00:21:59.596 Get LBA Status Capability: Not Supported 00:21:59.596 Command & Feature Lockdown Capability: Not Supported 00:21:59.596 Abort Command Limit: 1 00:21:59.596 Async 
Event Request Limit: 4 00:21:59.596 Number of Firmware Slots: N/A 00:21:59.596 Firmware Slot 1 Read-Only: N/A 00:21:59.596 Firmware Activation Without Reset: N/A 00:21:59.596 Multiple Update Detection Support: N/A 00:21:59.596 Firmware Update Granularity: No Information Provided 00:21:59.596 Per-Namespace SMART Log: No 00:21:59.596 Asymmetric Namespace Access Log Page: Not Supported 00:21:59.596 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:59.596 Command Effects Log Page: Not Supported 00:21:59.596 Get Log Page Extended Data: Supported 00:21:59.596 Telemetry Log Pages: Not Supported 00:21:59.596 Persistent Event Log Pages: Not Supported 00:21:59.596 Supported Log Pages Log Page: May Support 00:21:59.596 Commands Supported & Effects Log Page: Not Supported 00:21:59.596 Feature Identifiers & Effects Log Page:May Support 00:21:59.596 NVMe-MI Commands & Effects Log Page: May Support 00:21:59.596 Data Area 4 for Telemetry Log: Not Supported 00:21:59.596 Error Log Page Entries Supported: 128 00:21:59.596 Keep Alive: Not Supported 00:21:59.596 00:21:59.596 NVM Command Set Attributes 00:21:59.596 ========================== 00:21:59.596 Submission Queue Entry Size 00:21:59.596 Max: 1 00:21:59.596 Min: 1 00:21:59.596 Completion Queue Entry Size 00:21:59.596 Max: 1 00:21:59.596 Min: 1 00:21:59.596 Number of Namespaces: 0 00:21:59.596 Compare Command: Not Supported 00:21:59.596 Write Uncorrectable Command: Not Supported 00:21:59.596 Dataset Management Command: Not Supported 00:21:59.596 Write Zeroes Command: Not Supported 00:21:59.596 Set Features Save Field: Not Supported 00:21:59.596 Reservations: Not Supported 00:21:59.596 Timestamp: Not Supported 00:21:59.596 Copy: Not Supported 00:21:59.596 Volatile Write Cache: Not Present 00:21:59.596 Atomic Write Unit (Normal): 1 00:21:59.596 Atomic Write Unit (PFail): 1 00:21:59.596 Atomic Compare & Write Unit: 1 00:21:59.596 Fused Compare & Write: Supported 00:21:59.596 Scatter-Gather List 00:21:59.596 SGL Command Set: Supported 00:21:59.596 SGL Keyed: Supported 00:21:59.596 SGL Bit Bucket Descriptor: Not Supported 00:21:59.596 SGL Metadata Pointer: Not Supported 00:21:59.596 Oversized SGL: Not Supported 00:21:59.596 SGL Metadata Address: Not Supported 00:21:59.596 SGL Offset: Supported 00:21:59.596 Transport SGL Data Block: Not Supported 00:21:59.596 Replay Protected Memory Block: Not Supported 00:21:59.596 00:21:59.596 Firmware Slot Information 00:21:59.596 ========================= 00:21:59.596 Active slot: 0 00:21:59.596 00:21:59.596 00:21:59.596 Error Log 00:21:59.596 ========= 00:21:59.596 00:21:59.596 Active Namespaces 00:21:59.596 ================= 00:21:59.596 Discovery Log Page 00:21:59.596 ================== 00:21:59.596 Generation Counter: 2 00:21:59.596 Number of Records: 2 00:21:59.596 Record Format: 0 00:21:59.596 00:21:59.596 Discovery Log Entry 0 00:21:59.596 ---------------------- 00:21:59.596 Transport Type: 3 (TCP) 00:21:59.596 Address Family: 1 (IPv4) 00:21:59.596 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:59.596 Entry Flags: 00:21:59.596 Duplicate Returned Information: 1 00:21:59.596 Explicit Persistent Connection Support for Discovery: 1 00:21:59.596 Transport Requirements: 00:21:59.596 Secure Channel: Not Required 00:21:59.596 Port ID: 0 (0x0000) 00:21:59.596 Controller ID: 65535 (0xffff) 00:21:59.596 Admin Max SQ Size: 128 00:21:59.596 Transport Service Identifier: 4420 00:21:59.596 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:59.596 Transport Address: 10.0.0.2 00:21:59.596 
Discovery Log Entry 1 00:21:59.596 ---------------------- 00:21:59.596 Transport Type: 3 (TCP) 00:21:59.596 Address Family: 1 (IPv4) 00:21:59.596 Subsystem Type: 2 (NVM Subsystem) 00:21:59.596 Entry Flags: 00:21:59.596 Duplicate Returned Information: 0 00:21:59.596 Explicit Persistent Connection Support for Discovery: 0 00:21:59.596 Transport Requirements: 00:21:59.596 Secure Channel: Not Required 00:21:59.596 Port ID: 0 (0x0000) 00:21:59.596 Controller ID: 65535 (0xffff) 00:21:59.596 Admin Max SQ Size: 128 00:21:59.597 Transport Service Identifier: 4420 00:21:59.597 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:59.597 Transport Address: 10.0.0.2 [2024-12-06 17:58:47.352400] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:21:59.597 [2024-12-06 17:58:47.352411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a100) on tqpair=0x1dd8550 00:21:59.597 [2024-12-06 17:58:47.352418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.597 [2024-12-06 17:58:47.352423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a280) on tqpair=0x1dd8550 00:21:59.597 [2024-12-06 17:58:47.352428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.597 [2024-12-06 17:58:47.352433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a400) on tqpair=0x1dd8550 00:21:59.597 [2024-12-06 17:58:47.352438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.597 [2024-12-06 17:58:47.352443] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a580) on tqpair=0x1dd8550 00:21:59.597 [2024-12-06 17:58:47.352447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:59.597 [2024-12-06 17:58:47.352456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.352460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.352463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd8550) 00:21:59.597 [2024-12-06 17:58:47.352471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.597 [2024-12-06 17:58:47.352484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a580, cid 3, qid 0 00:21:59.597 [2024-12-06 17:58:47.352573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.597 [2024-12-06 17:58:47.352580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.597 [2024-12-06 17:58:47.352584] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.352588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a580) on tqpair=0x1dd8550 00:21:59.597 [2024-12-06 17:58:47.352594] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.352598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.352602] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd8550) 00:21:59.597 [2024-12-06 
17:58:47.352609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.597 [2024-12-06 17:58:47.352621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a580, cid 3, qid 0 00:21:59.597 [2024-12-06 17:58:47.352838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.597 [2024-12-06 17:58:47.352844] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.597 [2024-12-06 17:58:47.352848] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.352854] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a580) on tqpair=0x1dd8550 00:21:59.597 [2024-12-06 17:58:47.352860] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:59.597 [2024-12-06 17:58:47.352865] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:59.597 [2024-12-06 17:58:47.352874] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.352878] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.352882] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd8550) 00:21:59.597 [2024-12-06 17:58:47.352889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.597 [2024-12-06 17:58:47.352899] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a580, cid 3, qid 0 00:21:59.597 [2024-12-06 17:58:47.353087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.597 [2024-12-06 17:58:47.353093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.597 [2024-12-06 17:58:47.353097] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.353106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a580) on tqpair=0x1dd8550 00:21:59.597 [2024-12-06 17:58:47.353116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.353120] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.353124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd8550) 00:21:59.597 [2024-12-06 17:58:47.353131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.597 [2024-12-06 17:58:47.353141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a580, cid 3, qid 0 00:21:59.597 [2024-12-06 17:58:47.353359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.597 [2024-12-06 17:58:47.353366] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.597 [2024-12-06 17:58:47.353369] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.353373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a580) on tqpair=0x1dd8550 00:21:59.597 [2024-12-06 17:58:47.353383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.353387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.353390] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd8550) 00:21:59.597 [2024-12-06 17:58:47.353397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.597 [2024-12-06 17:58:47.353407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a580, cid 3, qid 0 00:21:59.597 [2024-12-06 17:58:47.353588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.597 [2024-12-06 17:58:47.353594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.597 [2024-12-06 17:58:47.353598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.353602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a580) on tqpair=0x1dd8550 00:21:59.597 [2024-12-06 17:58:47.353611] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.353615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.353619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd8550) 00:21:59.597 [2024-12-06 17:58:47.353625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.597 [2024-12-06 17:58:47.353635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a580, cid 3, qid 0 00:21:59.597 [2024-12-06 17:58:47.353838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.597 [2024-12-06 17:58:47.353847] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.597 [2024-12-06 17:58:47.353850] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.353854] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a580) on tqpair=0x1dd8550 00:21:59.597 [2024-12-06 17:58:47.353864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.353868] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.353871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd8550) 00:21:59.597 [2024-12-06 17:58:47.353878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.597 [2024-12-06 17:58:47.353888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a580, cid 3, qid 0 00:21:59.597 [2024-12-06 17:58:47.354098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.597 [2024-12-06 17:58:47.354109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.597 [2024-12-06 17:58:47.354112] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.354116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a580) on tqpair=0x1dd8550 00:21:59.597 [2024-12-06 17:58:47.354126] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.354130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.354134] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd8550) 00:21:59.597 [2024-12-06 17:58:47.354140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.597 [2024-12-06 17:58:47.354150] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a580, cid 3, qid 0 00:21:59.597 [2024-12-06 17:58:47.354338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.597 [2024-12-06 17:58:47.354345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.597 [2024-12-06 17:58:47.354348] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.354352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a580) on tqpair=0x1dd8550 00:21:59.597 [2024-12-06 17:58:47.354361] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.354365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.597 [2024-12-06 17:58:47.354369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd8550) 00:21:59.597 [2024-12-06 17:58:47.354376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.597 [2024-12-06 17:58:47.354385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a580, cid 3, qid 0 00:21:59.597 [2024-12-06 17:58:47.354576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.598 [2024-12-06 17:58:47.354582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.598 [2024-12-06 17:58:47.354586] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.598 [2024-12-06 17:58:47.354590] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a580) on tqpair=0x1dd8550 00:21:59.598 [2024-12-06 17:58:47.354599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.598 [2024-12-06 17:58:47.354603] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.598 [2024-12-06 17:58:47.354607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd8550) 00:21:59.598 [2024-12-06 17:58:47.354613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.598 [2024-12-06 17:58:47.354623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a580, cid 3, qid 0 00:21:59.598 [2024-12-06 17:58:47.354830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.598 [2024-12-06 17:58:47.354836] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.598 [2024-12-06 17:58:47.354842] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.598 [2024-12-06 17:58:47.354846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a580) on tqpair=0x1dd8550 00:21:59.598 [2024-12-06 17:58:47.354855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.598 [2024-12-06 17:58:47.354859] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.598 [2024-12-06 17:58:47.354863] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd8550) 00:21:59.598 [2024-12-06 17:58:47.354870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.598 [2024-12-06 17:58:47.354879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a580, cid 3, qid 0 00:21:59.598 
[2024-12-06 17:58:47.355045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.598 [2024-12-06 17:58:47.355052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.598 [2024-12-06 17:58:47.355055] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.598 [2024-12-06 17:58:47.355059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a580) on tqpair=0x1dd8550 00:21:59.598 [2024-12-06 17:58:47.355068] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.598 [2024-12-06 17:58:47.355072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.598 [2024-12-06 17:58:47.355076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dd8550) 00:21:59.598 [2024-12-06 17:58:47.355083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.598 [2024-12-06 17:58:47.355092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e3a580, cid 3, qid 0 00:21:59.598 [2024-12-06 17:58:47.359108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.598 [2024-12-06 17:58:47.359116] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.598 [2024-12-06 17:58:47.359120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.598 [2024-12-06 17:58:47.359124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e3a580) on tqpair=0x1dd8550 00:21:59.598 [2024-12-06 17:58:47.359131] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:21:59.598 00:21:59.598 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:59.598 [2024-12-06 17:58:47.384577] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
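The spdk_nvme_identify invocation above takes its target as a transport ID string passed to -r. A minimal sketch of how a standalone SPDK program could parse the same string and connect (this is not the identify tool's own source; error handling is reduced to bare exits):

/* Sketch: parse the trid string from the -r argument above and connect. */
#include <stdio.h>
#include <stdlib.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    if (spdk_env_init(&env_opts) < 0) {
        return EXIT_FAILURE;
    }

    /* Same transport ID string as the -r argument in the log above. */
    if (spdk_nvme_transport_id_parse(&trid,
        "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return EXIT_FAILURE;
    }

    /* spdk_nvme_connect() synchronously drives the admin-queue bring-up
     * traced below (FABRIC CONNECT, register reads, CC.EN, IDENTIFY, ...). */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return EXIT_FAILURE;
    }

    printf("connected to %s\n", trid.subnqn);
    spdk_nvme_detach(ctrlr);
    return EXIT_SUCCESS;
}

Because the connect call is synchronous, every state transition it performs shows up in the trace that follows before the tool prints its report.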
00:21:59.598 [2024-12-06 17:58:47.384608] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3117696 ] 00:21:59.861 [2024-12-06 17:58:47.437191] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:21:59.861 [2024-12-06 17:58:47.437239] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:59.861 [2024-12-06 17:58:47.437244] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:59.861 [2024-12-06 17:58:47.437258] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:59.861 [2024-12-06 17:58:47.437266] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:59.861 [2024-12-06 17:58:47.437795] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:21:59.861 [2024-12-06 17:58:47.437832] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x55d550 0 00:21:59.861 [2024-12-06 17:58:47.448113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:59.861 [2024-12-06 17:58:47.448125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:59.861 [2024-12-06 17:58:47.448130] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:59.861 [2024-12-06 17:58:47.448133] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:59.861 [2024-12-06 17:58:47.448163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.861 [2024-12-06 17:58:47.448169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.861 [2024-12-06 17:58:47.448173] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55d550) 00:21:59.861 [2024-12-06 17:58:47.448184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:59.861 [2024-12-06 17:58:47.448201] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf100, cid 0, qid 0 00:21:59.861 [2024-12-06 17:58:47.456112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.861 [2024-12-06 17:58:47.456122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.861 [2024-12-06 17:58:47.456125] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.861 [2024-12-06 17:58:47.456130] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf100) on tqpair=0x55d550 00:21:59.861 [2024-12-06 17:58:47.456138] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:59.861 [2024-12-06 17:58:47.456145] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:21:59.861 [2024-12-06 17:58:47.456150] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:21:59.861 [2024-12-06 17:58:47.456164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.861 [2024-12-06 17:58:47.456168] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.861 [2024-12-06 17:58:47.456172] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55d550) 00:21:59.861 [2024-12-06 17:58:47.456180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-12-06 17:58:47.456193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf100, cid 0, qid 0 00:21:59.861 [2024-12-06 17:58:47.456377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.861 [2024-12-06 17:58:47.456383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.861 [2024-12-06 17:58:47.456387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.861 [2024-12-06 17:58:47.456391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf100) on tqpair=0x55d550 00:21:59.861 [2024-12-06 17:58:47.456398] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:21:59.861 [2024-12-06 17:58:47.456406] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:21:59.861 [2024-12-06 17:58:47.456413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.861 [2024-12-06 17:58:47.456417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.861 [2024-12-06 17:58:47.456420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55d550) 00:21:59.861 [2024-12-06 17:58:47.456427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-12-06 17:58:47.456438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf100, cid 0, qid 0 00:21:59.861 [2024-12-06 17:58:47.456636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.861 [2024-12-06 17:58:47.456642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.861 [2024-12-06 17:58:47.456646] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.861 [2024-12-06 17:58:47.456653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf100) on tqpair=0x55d550 00:21:59.861 [2024-12-06 17:58:47.456658] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:21:59.861 [2024-12-06 17:58:47.456666] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:59.861 [2024-12-06 17:58:47.456673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.861 [2024-12-06 17:58:47.456677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.861 [2024-12-06 17:58:47.456680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55d550) 00:21:59.861 [2024-12-06 17:58:47.456687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.861 [2024-12-06 17:58:47.456697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf100, cid 0, qid 0 00:21:59.861 [2024-12-06 17:58:47.456888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.861 [2024-12-06 17:58:47.456894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.861 [2024-12-06 17:58:47.456898] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.861 [2024-12-06 17:58:47.456902] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf100) on tqpair=0x55d550 00:21:59.861 [2024-12-06 17:58:47.456907] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:59.861 [2024-12-06 17:58:47.456916] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.456920] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.456924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55d550) 00:21:59.862 [2024-12-06 17:58:47.456931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.862 [2024-12-06 17:58:47.456941] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf100, cid 0, qid 0 00:21:59.862 [2024-12-06 17:58:47.457156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.862 [2024-12-06 17:58:47.457163] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.862 [2024-12-06 17:58:47.457167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.457171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf100) on tqpair=0x55d550 00:21:59.862 [2024-12-06 17:58:47.457176] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:59.862 [2024-12-06 17:58:47.457181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:59.862 [2024-12-06 17:58:47.457189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:59.862 [2024-12-06 17:58:47.457297] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:21:59.862 [2024-12-06 17:58:47.457301] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:59.862 [2024-12-06 17:58:47.457309] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.457313] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.457316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55d550) 00:21:59.862 [2024-12-06 17:58:47.457323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.862 [2024-12-06 17:58:47.457334] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf100, cid 0, qid 0 00:21:59.862 [2024-12-06 17:58:47.457494] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.862 [2024-12-06 17:58:47.457503] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.862 [2024-12-06 17:58:47.457506] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.457510] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf100) on tqpair=0x55d550 00:21:59.862 [2024-12-06 
17:58:47.457515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:59.862 [2024-12-06 17:58:47.457524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.457528] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.457532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55d550) 00:21:59.862 [2024-12-06 17:58:47.457539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.862 [2024-12-06 17:58:47.457549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf100, cid 0, qid 0 00:21:59.862 [2024-12-06 17:58:47.457708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.862 [2024-12-06 17:58:47.457715] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.862 [2024-12-06 17:58:47.457718] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.457722] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf100) on tqpair=0x55d550 00:21:59.862 [2024-12-06 17:58:47.457727] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:59.862 [2024-12-06 17:58:47.457732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:59.862 [2024-12-06 17:58:47.457739] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:21:59.862 [2024-12-06 17:58:47.457752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:59.862 [2024-12-06 17:58:47.457761] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.457765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55d550) 00:21:59.862 [2024-12-06 17:58:47.457772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.862 [2024-12-06 17:58:47.457782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf100, cid 0, qid 0 00:21:59.862 [2024-12-06 17:58:47.458008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.862 [2024-12-06 17:58:47.458015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.862 [2024-12-06 17:58:47.458018] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.458022] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x55d550): datao=0, datal=4096, cccid=0 00:21:59.862 [2024-12-06 17:58:47.458027] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5bf100) on tqpair(0x55d550): expected_datao=0, payload_size=4096 00:21:59.862 [2024-12-06 17:58:47.458031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.458039] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.458043] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
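The trace above walks the controller-initialization state machine: read vs, read cap, check en, disable and wait for CSTS.RDY = 0, set CC.EN = 1, then wait for CSTS.RDY = 1, with each step carried over the fabric as a FABRIC PROPERTY GET/SET. A hedged sketch of reading the same registers back from application code once initialization is done, assuming ctrlr is the handle from the previous sketch:

/* Sketch: inspect the cached VS/CAP/CSTS values that the PROPERTY GET
 * commands above fetched during bring-up. */
#include <stdio.h>
#include "spdk/nvme.h"

static void print_ctrlr_regs(struct spdk_nvme_ctrlr *ctrlr)
{
    union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
    union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
    union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

    /* VS 1.3 and CSTS.RDY = 1 correspond to the "read vs" and
     * "wait for CSTS.RDY = 1" states traced above. */
    printf("NVMe version: %u.%u\n", (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr);
    printf("Max queue entries (CAP.MQES + 1): %u\n", (unsigned)cap.bits.mqes + 1u);
    printf("Controller ready (CSTS.RDY): %u\n", (unsigned)csts.bits.rdy);
}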
00:21:59.862 [2024-12-06 17:58:47.458218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.862 [2024-12-06 17:58:47.458225] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.862 [2024-12-06 17:58:47.458228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.458232] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf100) on tqpair=0x55d550 00:21:59.862 [2024-12-06 17:58:47.458239] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:21:59.862 [2024-12-06 17:58:47.458247] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:21:59.862 [2024-12-06 17:58:47.458251] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:21:59.862 [2024-12-06 17:58:47.458255] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:21:59.862 [2024-12-06 17:58:47.458260] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:21:59.862 [2024-12-06 17:58:47.458265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:21:59.862 [2024-12-06 17:58:47.458273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:59.862 [2024-12-06 17:58:47.458280] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.458284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.458287] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55d550) 00:21:59.862 [2024-12-06 17:58:47.458295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:59.862 [2024-12-06 17:58:47.458306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf100, cid 0, qid 0 00:21:59.862 [2024-12-06 17:58:47.458499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.862 [2024-12-06 17:58:47.458505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.862 [2024-12-06 17:58:47.458509] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.458513] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf100) on tqpair=0x55d550 00:21:59.862 [2024-12-06 17:58:47.458519] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.458523] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.458527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55d550) 00:21:59.862 [2024-12-06 17:58:47.458533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.862 [2024-12-06 17:58:47.458540] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.458543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.458547] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=1 on tqpair(0x55d550) 00:21:59.862 [2024-12-06 17:58:47.458553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.862 [2024-12-06 17:58:47.458559] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.458563] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.458566] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x55d550) 00:21:59.862 [2024-12-06 17:58:47.458572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.862 [2024-12-06 17:58:47.458578] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.458582] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.458585] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55d550) 00:21:59.862 [2024-12-06 17:58:47.458591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.862 [2024-12-06 17:58:47.458596] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:59.862 [2024-12-06 17:58:47.458606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:59.862 [2024-12-06 17:58:47.458614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.862 [2024-12-06 17:58:47.458618] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x55d550) 00:21:59.862 [2024-12-06 17:58:47.458625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.862 [2024-12-06 17:58:47.458637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf100, cid 0, qid 0 00:21:59.862 [2024-12-06 17:58:47.458642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf280, cid 1, qid 0 00:21:59.862 [2024-12-06 17:58:47.458647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf400, cid 2, qid 0 00:21:59.862 [2024-12-06 17:58:47.458652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf580, cid 3, qid 0 00:21:59.862 [2024-12-06 17:58:47.458657] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf700, cid 4, qid 0 00:21:59.862 [2024-12-06 17:58:47.458868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.862 [2024-12-06 17:58:47.458875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.863 [2024-12-06 17:58:47.458878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.458882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf700) on tqpair=0x55d550 00:21:59.863 [2024-12-06 17:58:47.458887] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:21:59.863 [2024-12-06 17:58:47.458892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 
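Above, the host issues SET FEATURES ASYNC EVENT CONFIGURATION, queues four ASYNC EVENT REQUESTs (cid 0 through 3), and reads the keep-alive timer before settling on "Sending keep alive every 5000000 us", which is half of the negotiated 10000 ms keep-alive timeout. A sketch of the host-side counterpart, assuming ctrlr is a connected controller handle; the polling loop is shown as infinite purely for brevity:

/* Sketch: receive async events and keep the keep-alive timer serviced. */
#include <stdio.h>
#include "spdk/nvme.h"

static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
    (void)arg;
    if (!spdk_nvme_cpl_is_error(cpl)) {
        /* cdw0 encodes the async event type/info per the NVMe spec. */
        printf("async event: cdw0 = 0x%08x\n", cpl->cdw0);
    }
}

static void poll_admin_queue(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

    /* Keep alives are sent from within admin-queue processing, so a
     * connected controller must be polled periodically or the target
     * will eventually disconnect it. */
    for (;;) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
}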
00:21:59.863 [2024-12-06 17:58:47.458902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:21:59.863 [2024-12-06 17:58:47.458908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:59.863 [2024-12-06 17:58:47.458915] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.458919] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.458922] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x55d550) 00:21:59.863 [2024-12-06 17:58:47.458929] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:59.863 [2024-12-06 17:58:47.458939] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf700, cid 4, qid 0 00:21:59.863 [2024-12-06 17:58:47.459148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.863 [2024-12-06 17:58:47.459155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.863 [2024-12-06 17:58:47.459159] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.459163] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf700) on tqpair=0x55d550 00:21:59.863 [2024-12-06 17:58:47.459227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:59.863 [2024-12-06 17:58:47.459236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:59.863 [2024-12-06 17:58:47.459243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.459247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x55d550) 00:21:59.863 [2024-12-06 17:58:47.459254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.863 [2024-12-06 17:58:47.459264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf700, cid 4, qid 0 00:21:59.863 [2024-12-06 17:58:47.459478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.863 [2024-12-06 17:58:47.459487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.863 [2024-12-06 17:58:47.459490] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.459494] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x55d550): datao=0, datal=4096, cccid=4 00:21:59.863 [2024-12-06 17:58:47.459499] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5bf700) on tqpair(0x55d550): expected_datao=0, payload_size=4096 00:21:59.863 [2024-12-06 17:58:47.459503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.459518] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.459523] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.459700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.863 [2024-12-06 17:58:47.459707] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.863 [2024-12-06 17:58:47.459710] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.459714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf700) on tqpair=0x55d550 00:21:59.863 [2024-12-06 17:58:47.459725] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:59.863 [2024-12-06 17:58:47.459733] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:59.863 [2024-12-06 17:58:47.459742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:59.863 [2024-12-06 17:58:47.459749] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.459752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x55d550) 00:21:59.863 [2024-12-06 17:58:47.459759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.863 [2024-12-06 17:58:47.459770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf700, cid 4, qid 0 00:21:59.863 [2024-12-06 17:58:47.459994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.863 [2024-12-06 17:58:47.460000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.863 [2024-12-06 17:58:47.460004] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.460007] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x55d550): datao=0, datal=4096, cccid=4 00:21:59.863 [2024-12-06 17:58:47.460012] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5bf700) on tqpair(0x55d550): expected_datao=0, payload_size=4096 00:21:59.863 [2024-12-06 17:58:47.460016] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.460031] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.460036] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.464110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.863 [2024-12-06 17:58:47.464118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.863 [2024-12-06 17:58:47.464121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.464125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf700) on tqpair=0x55d550 00:21:59.863 [2024-12-06 17:58:47.464136] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:59.863 [2024-12-06 17:58:47.464145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:59.863 [2024-12-06 17:58:47.464152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.464156] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x55d550) 00:21:59.863 [2024-12-06 17:58:47.464163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.863 [2024-12-06 17:58:47.464178] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf700, cid 4, qid 0 00:21:59.863 [2024-12-06 17:58:47.464343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.863 [2024-12-06 17:58:47.464350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.863 [2024-12-06 17:58:47.464353] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.464357] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x55d550): datao=0, datal=4096, cccid=4 00:21:59.863 [2024-12-06 17:58:47.464361] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5bf700) on tqpair(0x55d550): expected_datao=0, payload_size=4096 00:21:59.863 [2024-12-06 17:58:47.464365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.464403] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.464407] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.464575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.863 [2024-12-06 17:58:47.464582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.863 [2024-12-06 17:58:47.464585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.464589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf700) on tqpair=0x55d550 00:21:59.863 [2024-12-06 17:58:47.464599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:59.863 [2024-12-06 17:58:47.464607] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:59.863 [2024-12-06 17:58:47.464615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:59.863 [2024-12-06 17:58:47.464621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:59.863 [2024-12-06 17:58:47.464626] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:59.863 [2024-12-06 17:58:47.464632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:59.863 [2024-12-06 17:58:47.464637] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:59.863 [2024-12-06 17:58:47.464642] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:59.863 [2024-12-06 17:58:47.464647] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:59.863 [2024-12-06 17:58:47.464661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.464665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x55d550) 00:21:59.863 
[2024-12-06 17:58:47.464672] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.863 [2024-12-06 17:58:47.464679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.464683] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.464686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x55d550) 00:21:59.863 [2024-12-06 17:58:47.464692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:59.863 [2024-12-06 17:58:47.464706] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf700, cid 4, qid 0 00:21:59.863 [2024-12-06 17:58:47.464711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf880, cid 5, qid 0 00:21:59.863 [2024-12-06 17:58:47.464928] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.863 [2024-12-06 17:58:47.464934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.863 [2024-12-06 17:58:47.464938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.464942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf700) on tqpair=0x55d550 00:21:59.863 [2024-12-06 17:58:47.464949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.863 [2024-12-06 17:58:47.464954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.863 [2024-12-06 17:58:47.464958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.464962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf880) on tqpair=0x55d550 00:21:59.863 [2024-12-06 17:58:47.464970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.863 [2024-12-06 17:58:47.464974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x55d550) 00:21:59.864 [2024-12-06 17:58:47.464981] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.864 [2024-12-06 17:58:47.464991] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf880, cid 5, qid 0 00:21:59.864 [2024-12-06 17:58:47.465162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.864 [2024-12-06 17:58:47.465169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.864 [2024-12-06 17:58:47.465173] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.465177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf880) on tqpair=0x55d550 00:21:59.864 [2024-12-06 17:58:47.465186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.465190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x55d550) 00:21:59.864 [2024-12-06 17:58:47.465197] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.864 [2024-12-06 17:58:47.465207] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf880, cid 5, qid 0 00:21:59.864 [2024-12-06 17:58:47.465402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:21:59.864 [2024-12-06 17:58:47.465408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.864 [2024-12-06 17:58:47.465411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.465415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf880) on tqpair=0x55d550 00:21:59.864 [2024-12-06 17:58:47.465424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.465428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x55d550) 00:21:59.864 [2024-12-06 17:58:47.465435] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.864 [2024-12-06 17:58:47.465444] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf880, cid 5, qid 0 00:21:59.864 [2024-12-06 17:58:47.465639] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.864 [2024-12-06 17:58:47.465646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.864 [2024-12-06 17:58:47.465649] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.465653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf880) on tqpair=0x55d550 00:21:59.864 [2024-12-06 17:58:47.465667] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.465671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x55d550) 00:21:59.864 [2024-12-06 17:58:47.465678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.864 [2024-12-06 17:58:47.465685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.465693] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x55d550) 00:21:59.864 [2024-12-06 17:58:47.465699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.864 [2024-12-06 17:58:47.465706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.465710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x55d550) 00:21:59.864 [2024-12-06 17:58:47.465716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.864 [2024-12-06 17:58:47.465724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.465727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x55d550) 00:21:59.864 [2024-12-06 17:58:47.465733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.864 [2024-12-06 17:58:47.465745] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf880, cid 5, qid 0 00:21:59.864 [2024-12-06 17:58:47.465750] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf700, cid 4, qid 0 00:21:59.864 [2024-12-06 17:58:47.465755] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bfa00, cid 6, qid 0 00:21:59.864 [2024-12-06 17:58:47.465760] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bfb80, cid 7, qid 0 00:21:59.864 [2024-12-06 17:58:47.466055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.864 [2024-12-06 17:58:47.466061] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.864 [2024-12-06 17:58:47.466065] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.466069] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x55d550): datao=0, datal=8192, cccid=5 00:21:59.864 [2024-12-06 17:58:47.466073] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5bf880) on tqpair(0x55d550): expected_datao=0, payload_size=8192 00:21:59.864 [2024-12-06 17:58:47.466077] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.466149] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.466154] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.466160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.864 [2024-12-06 17:58:47.466166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.864 [2024-12-06 17:58:47.466169] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.466173] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x55d550): datao=0, datal=512, cccid=4 00:21:59.864 [2024-12-06 17:58:47.466177] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5bf700) on tqpair(0x55d550): expected_datao=0, payload_size=512 00:21:59.864 [2024-12-06 17:58:47.466181] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.466188] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.466191] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.466197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.864 [2024-12-06 17:58:47.466203] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.864 [2024-12-06 17:58:47.466206] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.466210] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x55d550): datao=0, datal=512, cccid=6 00:21:59.864 [2024-12-06 17:58:47.466214] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5bfa00) on tqpair(0x55d550): expected_datao=0, payload_size=512 00:21:59.864 [2024-12-06 17:58:47.466219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.466225] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.466231] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.466237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:59.864 [2024-12-06 17:58:47.466243] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:59.864 [2024-12-06 17:58:47.466246] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:59.864 [2024-12-06 17:58:47.466249] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x55d550): datao=0, datal=4096, cccid=7
00:21:59.864 [2024-12-06 17:58:47.466254] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5bfb80) on tqpair(0x55d550): expected_datao=0, payload_size=4096
00:21:59.864 [2024-12-06 17:58:47.466258] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:59.864 [2024-12-06 17:58:47.466298] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:59.864 [2024-12-06 17:58:47.466302] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:59.864 [2024-12-06 17:58:47.466517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:59.864 [2024-12-06 17:58:47.466523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:59.864 [2024-12-06 17:58:47.466526] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:59.864 [2024-12-06 17:58:47.466530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf880) on tqpair=0x55d550
00:21:59.864 [2024-12-06 17:58:47.466542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:59.864 [2024-12-06 17:58:47.466548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:59.864 [2024-12-06 17:58:47.466552] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:59.864 [2024-12-06 17:58:47.466556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf700) on tqpair=0x55d550
00:21:59.864 [2024-12-06 17:58:47.466566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:59.864 [2024-12-06 17:58:47.466572] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:59.864 [2024-12-06 17:58:47.466575] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:59.864 [2024-12-06 17:58:47.466579] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bfa00) on tqpair=0x55d550
00:21:59.864 [2024-12-06 17:58:47.466586] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:59.864 [2024-12-06 17:58:47.466592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:59.864 [2024-12-06 17:58:47.466596] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:59.864 [2024-12-06 17:58:47.466600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bfb80) on tqpair=0x55d550
=====================================================
00:21:59.864 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:59.864 =====================================================
00:21:59.864 Controller Capabilities/Features
00:21:59.864 ================================
00:21:59.864 Vendor ID: 8086
00:21:59.864 Subsystem Vendor ID: 8086
00:21:59.864 Serial Number: SPDK00000000000001
00:21:59.864 Model Number: SPDK bdev Controller
00:21:59.864 Firmware Version: 25.01
00:21:59.864 Recommended Arb Burst: 6
00:21:59.864 IEEE OUI Identifier: e4 d2 5c
00:21:59.864 Multi-path I/O
00:21:59.864 May have multiple subsystem ports: Yes
00:21:59.864 May have multiple controllers: Yes
00:21:59.864 Associated with SR-IOV VF: No
00:21:59.864 Max Data Transfer Size: 131072
00:21:59.864 Max Number of Namespaces: 32
00:21:59.864 Max Number of I/O Queues: 127
00:21:59.864 NVMe Specification Version (VS): 1.3
00:21:59.864 NVMe Specification Version (Identify): 1.3
00:21:59.864 Maximum Queue Entries: 128
00:21:59.864 Contiguous Queues Required: Yes
00:21:59.864 Arbitration Mechanisms Supported
00:21:59.864 Weighted Round Robin: Not Supported
00:21:59.864 Vendor Specific: Not Supported
00:21:59.864 Reset Timeout: 15000 ms
00:21:59.864 Doorbell Stride: 4 bytes
00:21:59.864 NVM Subsystem Reset: Not Supported
00:21:59.864 Command Sets Supported
00:21:59.864 NVM Command Set: Supported
00:21:59.864 Boot Partition: Not Supported
00:21:59.864 Memory Page Size Minimum: 4096 bytes
00:21:59.864 Memory Page Size Maximum: 4096 bytes
00:21:59.864 Persistent Memory Region: Not Supported
00:21:59.864 Optional Asynchronous Events Supported
00:21:59.865 Namespace Attribute Notices: Supported
00:21:59.865 Firmware Activation Notices: Not Supported
00:21:59.865 ANA Change Notices: Not Supported
00:21:59.865 PLE Aggregate Log Change Notices: Not Supported
00:21:59.865 LBA Status Info Alert Notices: Not Supported
00:21:59.865 EGE Aggregate Log Change Notices: Not Supported
00:21:59.865 Normal NVM Subsystem Shutdown event: Not Supported
00:21:59.865 Zone Descriptor Change Notices: Not Supported
00:21:59.865 Discovery Log Change Notices: Not Supported
00:21:59.865 Controller Attributes
00:21:59.865 128-bit Host Identifier: Supported
00:21:59.865 Non-Operational Permissive Mode: Not Supported
00:21:59.865 NVM Sets: Not Supported
00:21:59.865 Read Recovery Levels: Not Supported
00:21:59.865 Endurance Groups: Not Supported
00:21:59.865 Predictable Latency Mode: Not Supported
00:21:59.865 Traffic Based Keep ALive: Not Supported
00:21:59.865 Namespace Granularity: Not Supported
00:21:59.865 SQ Associations: Not Supported
00:21:59.865 UUID List: Not Supported
00:21:59.865 Multi-Domain Subsystem: Not Supported
00:21:59.865 Fixed Capacity Management: Not Supported
00:21:59.865 Variable Capacity Management: Not Supported
00:21:59.865 Delete Endurance Group: Not Supported
00:21:59.865 Delete NVM Set: Not Supported
00:21:59.865 Extended LBA Formats Supported: Not Supported
00:21:59.865 Flexible Data Placement Supported: Not Supported
00:21:59.865
00:21:59.865 Controller Memory Buffer Support
00:21:59.865 ================================
00:21:59.865 Supported: No
00:21:59.865
00:21:59.865 Persistent Memory Region Support
00:21:59.865 ================================
00:21:59.865 Supported: No
00:21:59.865
00:21:59.865 Admin Command Set Attributes
00:21:59.865 ============================
00:21:59.865 Security Send/Receive: Not Supported
00:21:59.865 Format NVM: Not Supported
00:21:59.865 Firmware Activate/Download: Not Supported
00:21:59.865 Namespace Management: Not Supported
00:21:59.865 Device Self-Test: Not Supported
00:21:59.865 Directives: Not Supported
00:21:59.865 NVMe-MI: Not Supported
00:21:59.865 Virtualization Management: Not Supported
00:21:59.865 Doorbell Buffer Config: Not Supported
00:21:59.865 Get LBA Status Capability: Not Supported
00:21:59.865 Command & Feature Lockdown Capability: Not Supported
00:21:59.865 Abort Command Limit: 4
00:21:59.865 Async Event Request Limit: 4
00:21:59.865 Number of Firmware Slots: N/A
00:21:59.865 Firmware Slot 1 Read-Only: N/A
00:21:59.865 Firmware Activation Without Reset: N/A
00:21:59.865 Multiple Update Detection Support: N/A
00:21:59.865 Firmware Update Granularity: No Information Provided
00:21:59.865 Per-Namespace SMART Log: No
00:21:59.865 Asymmetric Namespace Access Log Page: Not Supported
00:21:59.865 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:21:59.865 Command Effects Log Page: Supported
00:21:59.865 Get Log Page Extended Data: Supported
00:21:59.865 Telemetry Log Pages: Not Supported
00:21:59.865 Persistent Event Log Pages: Not Supported
00:21:59.865 Supported Log Pages Log Page: May Support
00:21:59.865 Commands Supported & Effects Log Page: Not Supported
00:21:59.865 Feature Identifiers & Effects Log Page:May Support
00:21:59.865 NVMe-MI Commands & Effects Log Page: May Support
00:21:59.865 Data Area 4 for Telemetry Log: Not Supported
00:21:59.865 Error Log Page Entries Supported: 128
00:21:59.865 Keep Alive: Supported
00:21:59.865 Keep Alive Granularity: 10000 ms
00:21:59.865
00:21:59.865 NVM Command Set Attributes
00:21:59.865 ==========================
00:21:59.865 Submission Queue Entry Size
00:21:59.865 Max: 64
00:21:59.865 Min: 64
00:21:59.865 Completion Queue Entry Size
00:21:59.865 Max: 16
00:21:59.865 Min: 16
00:21:59.865 Number of Namespaces: 32
00:21:59.865 Compare Command: Supported
00:21:59.865 Write Uncorrectable Command: Not Supported
00:21:59.865 Dataset Management Command: Supported
00:21:59.865 Write Zeroes Command: Supported
00:21:59.865 Set Features Save Field: Not Supported
00:21:59.865 Reservations: Supported
00:21:59.865 Timestamp: Not Supported
00:21:59.865 Copy: Supported
00:21:59.865 Volatile Write Cache: Present
00:21:59.865 Atomic Write Unit (Normal): 1
00:21:59.865 Atomic Write Unit (PFail): 1
00:21:59.865 Atomic Compare & Write Unit: 1
00:21:59.865 Fused Compare & Write: Supported
00:21:59.865 Scatter-Gather List
00:21:59.865 SGL Command Set: Supported
00:21:59.865 SGL Keyed: Supported
00:21:59.865 SGL Bit Bucket Descriptor: Not Supported
00:21:59.865 SGL Metadata Pointer: Not Supported
00:21:59.865 Oversized SGL: Not Supported
00:21:59.865 SGL Metadata Address: Not Supported
00:21:59.865 SGL Offset: Supported
00:21:59.865 Transport SGL Data Block: Not Supported
00:21:59.865 Replay Protected Memory Block: Not Supported
00:21:59.865
00:21:59.865 Firmware Slot Information
00:21:59.865 =========================
00:21:59.865 Active slot: 1
00:21:59.865 Slot 1 Firmware Revision: 25.01
00:21:59.865
00:21:59.865
00:21:59.865 Commands Supported and Effects
00:21:59.865 ==============================
00:21:59.865 Admin Commands
00:21:59.865 --------------
00:21:59.865 Get Log Page (02h): Supported
00:21:59.865 Identify (06h): Supported
00:21:59.865 Abort (08h): Supported
00:21:59.865 Set Features (09h): Supported
00:21:59.865 Get Features (0Ah): Supported
00:21:59.865 Asynchronous Event Request (0Ch): Supported
00:21:59.865 Keep Alive (18h): Supported
00:21:59.865 I/O Commands
00:21:59.865 ------------
00:21:59.865 Flush (00h): Supported LBA-Change
00:21:59.865 Write (01h): Supported LBA-Change
00:21:59.865 Read (02h): Supported
00:21:59.865 Compare (05h): Supported
00:21:59.865 Write Zeroes (08h): Supported LBA-Change
00:21:59.865 Dataset Management (09h): Supported LBA-Change
00:21:59.865 Copy (19h): Supported LBA-Change
00:21:59.865
00:21:59.865 Error Log
00:21:59.865 =========
00:21:59.865
00:21:59.865 Arbitration
00:21:59.865 ===========
00:21:59.865 Arbitration Burst: 1
00:21:59.865
00:21:59.865 Power Management
00:21:59.865 ================
00:21:59.865 Number of Power States: 1
00:21:59.865 Current Power State: Power State #0
00:21:59.865 Power State #0:
00:21:59.865 Max Power: 0.00 W
00:21:59.865 Non-Operational State: Operational
00:21:59.865 Entry Latency: Not Reported
00:21:59.865 Exit Latency: Not Reported
00:21:59.865 Relative Read Throughput: 0
00:21:59.865 Relative Read Latency: 0
00:21:59.865 Relative Write Throughput: 0
00:21:59.865 Relative Write Latency: 0
00:21:59.865 Idle Power: Not Reported
00:21:59.865 Active Power: Not Reported
00:21:59.865 Non-Operational Permissive Mode: Not Supported
00:21:59.865
00:21:59.865 Health Information
00:21:59.865 ==================
00:21:59.865 Critical Warnings:
00:21:59.865 Available Spare Space: OK
00:21:59.865 Temperature: OK
00:21:59.865 Device Reliability: OK
00:21:59.865 Read Only: No
00:21:59.865 Volatile Memory Backup: OK
00:21:59.865 Current Temperature: 0 Kelvin (-273 Celsius)
00:21:59.865 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:21:59.865 Available Spare: 0%
00:21:59.865 Available Spare Threshold: 0%
00:21:59.865 Life Percentage Used:[2024-12-06 17:58:47.466692] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:59.865 [2024-12-06 17:58:47.466698] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x55d550)
00:21:59.865 [2024-12-06 17:58:47.466704] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:59.865 [2024-12-06 17:58:47.466716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bfb80, cid 7, qid 0
00:21:59.865 [2024-12-06 17:58:47.466904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:59.865 [2024-12-06 17:58:47.466911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:59.865 [2024-12-06 17:58:47.466915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:59.865 [2024-12-06 17:58:47.466918] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bfb80) on tqpair=0x55d550
00:21:59.865 [2024-12-06 17:58:47.466949] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:21:59.865 [2024-12-06 17:58:47.466958] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf100) on tqpair=0x55d550
00:21:59.865 [2024-12-06 17:58:47.466964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:59.865 [2024-12-06 17:58:47.466970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf280) on tqpair=0x55d550
00:21:59.865 [2024-12-06 17:58:47.466978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:59.865 [2024-12-06 17:58:47.466983] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf400) on tqpair=0x55d550
00:21:59.865 [2024-12-06 17:58:47.466988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:59.866 [2024-12-06 17:58:47.466993] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf580) on tqpair=0x55d550
00:21:59.866 [2024-12-06 17:58:47.466997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:59.866 [2024-12-06 17:58:47.467006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:59.866 [2024-12-06 17:58:47.467010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:59.866 [2024-12-06 17:58:47.467013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55d550)
00:21:59.866 [2024-12-06 17:58:47.467020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:59.866 [2024-12-06 17:58:47.467032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf580, cid 3, qid 0
00:21:59.866 [2024-12-06
17:58:47.467206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.866 [2024-12-06 17:58:47.467213] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.866 [2024-12-06 17:58:47.467217] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.866 [2024-12-06 17:58:47.467220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf580) on tqpair=0x55d550 00:21:59.866 [2024-12-06 17:58:47.467227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.866 [2024-12-06 17:58:47.467231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.866 [2024-12-06 17:58:47.467235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55d550) 00:21:59.866 [2024-12-06 17:58:47.467242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.866 [2024-12-06 17:58:47.467255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf580, cid 3, qid 0 00:21:59.866 [2024-12-06 17:58:47.467435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.866 [2024-12-06 17:58:47.467442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.866 [2024-12-06 17:58:47.467445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.866 [2024-12-06 17:58:47.467449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf580) on tqpair=0x55d550 00:21:59.866 [2024-12-06 17:58:47.467454] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:59.866 [2024-12-06 17:58:47.467459] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:59.866 [2024-12-06 17:58:47.467468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.866 [2024-12-06 17:58:47.467472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.866 [2024-12-06 17:58:47.467475] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55d550) 00:21:59.866 [2024-12-06 17:58:47.467482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.866 [2024-12-06 17:58:47.467493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf580, cid 3, qid 0 00:21:59.866 [2024-12-06 17:58:47.467645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.866 [2024-12-06 17:58:47.467651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.866 [2024-12-06 17:58:47.467655] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.866 [2024-12-06 17:58:47.467659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf580) on tqpair=0x55d550 00:21:59.866 [2024-12-06 17:58:47.467668] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.866 [2024-12-06 17:58:47.467672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.866 [2024-12-06 17:58:47.467678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55d550) 00:21:59.866 [2024-12-06 17:58:47.467685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.866 [2024-12-06 17:58:47.467695] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf580, cid 3, qid 0 00:21:59.866 [2024-12-06 17:58:47.467917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.866 [2024-12-06 17:58:47.467923] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.866 [2024-12-06 17:58:47.467927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.866 [2024-12-06 17:58:47.467931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf580) on tqpair=0x55d550 00:21:59.866 [2024-12-06 17:58:47.467941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.866 [2024-12-06 17:58:47.467945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.866 [2024-12-06 17:58:47.467948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55d550) 00:21:59.866 [2024-12-06 17:58:47.467955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.866 [2024-12-06 17:58:47.467965] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf580, cid 3, qid 0 00:21:59.866 [2024-12-06 17:58:47.472109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.866 [2024-12-06 17:58:47.472117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.866 [2024-12-06 17:58:47.472121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.866 [2024-12-06 17:58:47.472125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf580) on tqpair=0x55d550 00:21:59.866 [2024-12-06 17:58:47.472135] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:59.866 [2024-12-06 17:58:47.472139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:59.866 [2024-12-06 17:58:47.472142] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55d550) 00:21:59.866 [2024-12-06 17:58:47.472149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:59.866 [2024-12-06 17:58:47.472161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5bf580, cid 3, qid 0 00:21:59.866 [2024-12-06 17:58:47.472340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:59.866 [2024-12-06 17:58:47.472347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:59.866 [2024-12-06 17:58:47.472350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:59.866 [2024-12-06 17:58:47.472354] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5bf580) on tqpair=0x55d550 00:21:59.866 [2024-12-06 17:58:47.472362] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:21:59.866 0% 00:21:59.866 Data Units Read: 0 00:21:59.866 Data Units Written: 0 00:21:59.866 Host Read Commands: 0 00:21:59.866 Host Write Commands: 0 00:21:59.866 Controller Busy Time: 0 minutes 00:21:59.866 Power Cycles: 0 00:21:59.866 Power On Hours: 0 hours 00:21:59.866 Unsafe Shutdowns: 0 00:21:59.866 Unrecoverable Media Errors: 0 00:21:59.866 Lifetime Error Log Entries: 0 00:21:59.866 Warning Temperature Time: 0 minutes 00:21:59.866 Critical Temperature Time: 0 minutes 00:21:59.866 00:21:59.866 Number of Queues 00:21:59.866 ================ 00:21:59.866 Number of I/O Submission Queues: 127 00:21:59.866 Number 
of I/O Completion Queues: 127 00:21:59.866 00:21:59.866 Active Namespaces 00:21:59.866 ================= 00:21:59.866 Namespace ID:1 00:21:59.866 Error Recovery Timeout: Unlimited 00:21:59.866 Command Set Identifier: NVM (00h) 00:21:59.866 Deallocate: Supported 00:21:59.866 Deallocated/Unwritten Error: Not Supported 00:21:59.866 Deallocated Read Value: Unknown 00:21:59.866 Deallocate in Write Zeroes: Not Supported 00:21:59.866 Deallocated Guard Field: 0xFFFF 00:21:59.866 Flush: Supported 00:21:59.866 Reservation: Supported 00:21:59.866 Namespace Sharing Capabilities: Multiple Controllers 00:21:59.866 Size (in LBAs): 131072 (0GiB) 00:21:59.866 Capacity (in LBAs): 131072 (0GiB) 00:21:59.866 Utilization (in LBAs): 131072 (0GiB) 00:21:59.866 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:59.866 EUI64: ABCDEF0123456789 00:21:59.866 UUID: c875eec2-bb3d-4787-b45a-6c3821eac7a1 00:21:59.866 Thin Provisioning: Not Supported 00:21:59.866 Per-NS Atomic Units: Yes 00:21:59.866 Atomic Boundary Size (Normal): 0 00:21:59.866 Atomic Boundary Size (PFail): 0 00:21:59.866 Atomic Boundary Offset: 0 00:21:59.866 Maximum Single Source Range Length: 65535 00:21:59.866 Maximum Copy Length: 65535 00:21:59.866 Maximum Source Range Count: 1 00:21:59.866 NGUID/EUI64 Never Reused: No 00:21:59.866 Namespace Write Protected: No 00:21:59.866 Number of LBA Formats: 1 00:21:59.866 Current LBA Format: LBA Format #00 00:21:59.866 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:59.866 00:21:59.866 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:59.866 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:59.866 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.866 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:59.866 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.866 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:59.866 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:59.866 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:59.866 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:59.866 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:59.866 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:59.867 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:59.867 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:59.867 rmmod nvme_tcp 00:21:59.867 rmmod nvme_fabrics 00:21:59.867 rmmod nvme_keyring 00:21:59.867 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:59.867 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:59.867 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:59.867 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3117344 ']' 00:21:59.867 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3117344 00:21:59.867 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3117344 ']' 00:21:59.867 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify 
-- common/autotest_common.sh@958 -- # kill -0 3117344 00:21:59.867 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:21:59.867 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.867 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3117344 00:21:59.867 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:59.867 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:59.867 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3117344' 00:21:59.867 killing process with pid 3117344 00:21:59.867 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3117344 00:21:59.867 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3117344 00:22:00.126 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:00.126 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:00.126 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:00.126 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:00.126 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:00.126 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:00.126 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:00.126 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:00.126 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:00.126 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.126 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.126 17:58:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.033 17:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:02.033 00:22:02.033 real 0m9.229s 00:22:02.033 user 0m7.025s 00:22:02.033 sys 0m4.418s 00:22:02.033 17:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:02.033 17:58:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:02.033 ************************************ 00:22:02.033 END TEST nvmf_identify 00:22:02.033 ************************************ 00:22:02.033 17:58:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:02.033 17:58:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:02.033 17:58:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:02.033 17:58:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.033 ************************************ 00:22:02.033 START TEST nvmf_perf 00:22:02.033 ************************************ 00:22:02.033 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:02.293 * Looking for test storage... 00:22:02.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:02.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.293 --rc genhtml_branch_coverage=1 00:22:02.293 --rc genhtml_function_coverage=1 00:22:02.293 --rc genhtml_legend=1 00:22:02.293 --rc geninfo_all_blocks=1 00:22:02.293 --rc geninfo_unexecuted_blocks=1 00:22:02.293 00:22:02.293 ' 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:02.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.293 --rc genhtml_branch_coverage=1 00:22:02.293 --rc genhtml_function_coverage=1 00:22:02.293 --rc genhtml_legend=1 00:22:02.293 --rc geninfo_all_blocks=1 00:22:02.293 --rc geninfo_unexecuted_blocks=1 00:22:02.293 00:22:02.293 ' 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:02.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.293 --rc genhtml_branch_coverage=1 00:22:02.293 --rc genhtml_function_coverage=1 00:22:02.293 --rc genhtml_legend=1 00:22:02.293 --rc geninfo_all_blocks=1 00:22:02.293 --rc geninfo_unexecuted_blocks=1 00:22:02.293 00:22:02.293 ' 00:22:02.293 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:02.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.294 --rc genhtml_branch_coverage=1 00:22:02.294 --rc genhtml_function_coverage=1 00:22:02.294 --rc genhtml_legend=1 00:22:02.294 --rc geninfo_all_blocks=1 00:22:02.294 --rc geninfo_unexecuted_blocks=1 00:22:02.294 00:22:02.294 ' 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:02.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.294 17:58:49 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:02.294 17:58:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:07.569 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:07.570 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:07.570 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:07.570 Found net devices under 0000:31:00.0: cvl_0_0 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.570 17:58:55 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:07.570 Found net devices under 0000:31:00.1: cvl_0_1 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.570 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:07.829 17:58:55 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:07.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:07.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:22:07.829 00:22:07.829 --- 10.0.0.2 ping statistics --- 00:22:07.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.829 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:07.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:07.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:22:07.829 00:22:07.829 --- 10.0.0.1 ping statistics --- 00:22:07.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.829 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3122043 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3122043 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3122043 ']' 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
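[Note] Everything perf.sh does from this point on goes through that RPC socket. A minimal sketch of the same target bring-up, assuming nvmf_tgt is already listening on /var/tmp/spdk.sock and reusing this run's addresses (10.0.0.2:4420, cnode1) — each rpc.py call mirrors one traced below:

# sketch: reproduce the subsystem plumbing perf.sh performs over /var/tmp/spdk.sock
scripts/rpc.py nvmf_create_transport -t tcp -o                                   # TCP transport, default opts
scripts/rpc.py bdev_malloc_create 64 512                                         # 64 MiB malloc bdev, 512 B blocks -> Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0          # expose the bdev as NSID 1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420  # discovery service on the same port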
00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:07.829 17:58:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:07.829 [2024-12-06 17:58:55.579833] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:22:07.829 [2024-12-06 17:58:55.579886] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.089 [2024-12-06 17:58:55.668058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:08.089 [2024-12-06 17:58:55.721646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.089 [2024-12-06 17:58:55.721701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.089 [2024-12-06 17:58:55.721711] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.089 [2024-12-06 17:58:55.721719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.089 [2024-12-06 17:58:55.721726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:08.089 [2024-12-06 17:58:55.724203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.089 [2024-12-06 17:58:55.724555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.089 [2024-12-06 17:58:55.724690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:08.089 [2024-12-06 17:58:55.724691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.658 17:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.658 17:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:08.658 17:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:08.658 17:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:08.658 17:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:08.658 17:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.658 17:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:08.658 17:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:09.227 17:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:09.227 17:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:09.486 17:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:22:09.486 17:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:09.486 17:58:57 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:09.486 17:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:22:09.486 17:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:09.486 17:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:09.486 17:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:09.745 [2024-12-06 17:58:57.419639] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.745 17:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:10.006 17:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:10.006 17:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:10.006 17:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:10.006 17:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:10.266 17:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:10.266 [2024-12-06 17:58:58.059011] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.266 17:58:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:10.526 17:58:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:22:10.526 17:58:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:22:10.526 17:58:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:10.526 17:58:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:22:11.906 Initializing NVMe Controllers 00:22:11.906 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:22:11.906 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:22:11.906 Initialization complete. Launching workers. 
00:22:11.906 ======================================================== 00:22:11.906 Latency(us) 00:22:11.906 Device Information : IOPS MiB/s Average min max 00:22:11.906 PCIE (0000:65:00.0) NSID 1 from core 0: 97278.61 379.99 328.23 44.42 4376.89 00:22:11.906 ======================================================== 00:22:11.906 Total : 97278.61 379.99 328.23 44.42 4376.89 00:22:11.906 00:22:11.906 17:58:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:13.287 Initializing NVMe Controllers 00:22:13.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:13.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:13.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:13.287 Initialization complete. Launching workers. 00:22:13.287 ======================================================== 00:22:13.287 Latency(us) 00:22:13.287 Device Information : IOPS MiB/s Average min max 00:22:13.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 95.00 0.37 10528.70 251.69 45622.67 00:22:13.287 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 59.00 0.23 17450.06 7968.23 47893.60 00:22:13.287 ======================================================== 00:22:13.287 Total : 154.00 0.60 13180.39 251.69 47893.60 00:22:13.287 00:22:13.287 17:59:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:14.663 Initializing NVMe Controllers 00:22:14.663 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:14.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:14.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:14.663 Initialization complete. Launching workers. 00:22:14.663 ======================================================== 00:22:14.663 Latency(us) 00:22:14.663 Device Information : IOPS MiB/s Average min max 00:22:14.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12096.00 47.25 2649.88 422.45 8175.93 00:22:14.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3776.00 14.75 8512.06 5156.06 16552.42 00:22:14.663 ======================================================== 00:22:14.663 Total : 15872.00 62.00 4044.51 422.45 16552.42 00:22:14.663 00:22:14.663 17:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:14.663 17:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:14.663 17:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:17.198 Initializing NVMe Controllers 00:22:17.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:17.198 Controller IO queue size 128, less than required. 00:22:17.198 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:22:17.198 Controller IO queue size 128, less than required.
00:22:17.198 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:17.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:17.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:17.198 Initialization complete. Launching workers.
00:22:17.198 ========================================================
00:22:17.198 Latency(us)
00:22:17.198 Device Information : IOPS MiB/s Average min max
00:22:17.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1787.63 446.91 73199.13 41104.41 120327.35
00:22:17.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 616.87 154.22 214490.50 59288.63 296634.85
00:22:17.198 ========================================================
00:22:17.198 Total : 2404.51 601.13 109447.27 41104.41 296634.85
00:22:17.198
00:22:17.198 17:59:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:22:17.455 No valid NVMe controllers or AIO or URING devices found
00:22:17.455 Initializing NVMe Controllers
00:22:17.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:17.455 Controller IO queue size 128, less than required.
00:22:17.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:17.455 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:22:17.455 Controller IO queue size 128, less than required.
00:22:17.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:17.455 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:22:17.455 WARNING: Some requested NVMe devices were skipped
00:22:17.455 17:59:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:22:19.989 Initializing NVMe Controllers
00:22:19.989 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:19.989 Controller IO queue size 128, less than required.
00:22:19.989 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:19.989 Controller IO queue size 128, less than required.
00:22:19.989 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:19.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:19.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:19.989 Initialization complete. Launching workers.
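
The -o 36964 run above ended with "No valid NVMe controllers or AIO or URING devices found" by construction: 36964 is not a multiple of the 512-byte sector size, so perf dropped both namespaces and had nothing left to drive (the harness appears to exercise exactly this skip path on purpose). Rounding an IO size up to the sector boundary is one line of shell arithmetic; a sketch using the sizes from the warning:

    io_size=36964 sector=512
    aligned=$(( (io_size + sector - 1) / sector * sector ))  # round up to a sector multiple
    echo "$io_size -> $aligned"                              # prints: 36964 -> 37376

The --transport-stat run started above completes below with per-namespace poll counters.
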
00:22:19.989
00:22:19.989 ====================
00:22:19.989 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:22:19.989 TCP transport:
00:22:19.989 polls: 42949
00:22:19.989 idle_polls: 29062
00:22:19.989 sock_completions: 13887
00:22:19.989 nvme_completions: 7113
00:22:19.989 submitted_requests: 10718
00:22:19.989 queued_requests: 1
00:22:19.989
00:22:19.989 ====================
00:22:19.989 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:22:19.989 TCP transport:
00:22:19.989 polls: 43098
00:22:19.989 idle_polls: 25991
00:22:19.989 sock_completions: 17107
00:22:19.989 nvme_completions: 7147
00:22:19.989 submitted_requests: 10676
00:22:19.989 queued_requests: 1
00:22:19.989 ========================================================
00:22:19.989 Latency(us)
00:22:19.989 Device Information : IOPS MiB/s Average min max
00:22:19.989 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1778.00 444.50 72835.42 46750.82 121722.95
00:22:19.989 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1786.50 446.62 71792.48 30167.96 117251.83
00:22:19.989 ========================================================
00:22:19.989 Total : 3564.50 891.12 72312.71 30167.96 121722.95
00:22:19.989
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:19.989 rmmod nvme_tcp
00:22:19.989 rmmod nvme_fabrics
00:22:19.989 rmmod nvme_keyring
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3122043 ']'
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3122043
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3122043 ']'
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3122043
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3122043
00:22:19.989 17:59:07
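
The --transport-stat counters printed above are raw; the derived number worth watching is the fraction of polls that actually found work. For NSID 1 that is 1 - 29062/42949 ≈ 0.32, for NSID 2 it is 1 - 25991/43098 ≈ 0.40, so the reactor was busy on roughly a third of its polls even at q=128 with 256 KiB IOs. A small sketch that computes this from a saved copy of the perf output (perf.log is a hypothetical capture; it matches the field layout printed above whether or not each line carries a timestamp prefix, since $NF takes the last field):

    # Print the busy fraction for each "TCP transport:" statistics block.
    awk '
      /polls:/ && !/idle/ { polls = $NF }                        # e.g. "polls: 42949"
      /idle_polls:/       { printf "busy fraction: %.2f\n", 1 - $NF / polls }
    ' perf.log
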
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3122043' 00:22:19.989 killing process with pid 3122043 00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3122043 00:22:19.989 17:59:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3122043 00:22:21.914 17:59:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:21.914 17:59:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:21.914 17:59:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:21.914 17:59:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:21.914 17:59:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:21.914 17:59:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:21.914 17:59:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:21.914 17:59:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:21.914 17:59:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:21.914 17:59:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.914 17:59:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.914 17:59:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.449 17:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:24.449 00:22:24.449 real 0m21.931s 00:22:24.449 user 0m56.584s 00:22:24.449 sys 0m6.741s 00:22:24.449 17:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.449 17:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:24.449 ************************************ 00:22:24.449 END TEST nvmf_perf 00:22:24.449 ************************************ 00:22:24.449 17:59:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:24.449 17:59:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:24.449 17:59:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.449 17:59:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.449 ************************************ 00:22:24.449 START TEST nvmf_fio_host 00:22:24.449 ************************************ 00:22:24.449 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:24.449 * Looking for test storage... 
00:22:24.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:24.449 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:24.449 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:24.449 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:24.449 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:24.449 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:24.449 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:24.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.450 --rc genhtml_branch_coverage=1 00:22:24.450 --rc genhtml_function_coverage=1 00:22:24.450 --rc genhtml_legend=1 00:22:24.450 --rc geninfo_all_blocks=1 00:22:24.450 --rc geninfo_unexecuted_blocks=1 00:22:24.450 00:22:24.450 ' 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:24.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.450 --rc genhtml_branch_coverage=1 00:22:24.450 --rc genhtml_function_coverage=1 00:22:24.450 --rc genhtml_legend=1 00:22:24.450 --rc geninfo_all_blocks=1 00:22:24.450 --rc geninfo_unexecuted_blocks=1 00:22:24.450 00:22:24.450 ' 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:24.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.450 --rc genhtml_branch_coverage=1 00:22:24.450 --rc genhtml_function_coverage=1 00:22:24.450 --rc genhtml_legend=1 00:22:24.450 --rc geninfo_all_blocks=1 00:22:24.450 --rc geninfo_unexecuted_blocks=1 00:22:24.450 00:22:24.450 ' 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:24.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.450 --rc genhtml_branch_coverage=1 00:22:24.450 --rc genhtml_function_coverage=1 00:22:24.450 --rc genhtml_legend=1 00:22:24.450 --rc geninfo_all_blocks=1 00:22:24.450 --rc geninfo_unexecuted_blocks=1 00:22:24.450 00:22:24.450 ' 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.450 17:59:11 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.450 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:24.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:24.451 
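
One genuine bug surfaces in the trace above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and bash logs "[: : integer expression expected", because the variable under test expands to the empty string. The run survives only because the broken test evaluates false and the script moves on. The usual hardening is to default the expansion before an arithmetic comparison; a sketch with a stand-in variable name (VAR is hypothetical, not the harness's actual variable):

    [ "$VAR" -eq 1 ] && echo yes        # fails exactly as in the log when VAR is empty
    [ "${VAR:-0}" -eq 1 ] && echo yes   # default empty to 0; the test stays valid
    (( ${VAR:-0} == 1 )) && echo yes    # same idea with arithmetic evaluation
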
17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:24.451 17:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:29.723 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:29.723 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:29.724 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:29.724 Found net devices under 0000:31:00.0: cvl_0_0 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:29.724 Found net devices under 0000:31:00.1: cvl_0_1 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:29.724 17:59:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:29.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:29.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms
00:22:29.724
00:22:29.724 --- 10.0.0.2 ping statistics ---
00:22:29.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:29.724 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms
00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:29.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
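
This block is the harness's standard phy-NIC loopback: one port of the E810 pair (cvl_0_0) moves into a private network namespace and becomes the target side at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. The two ports are evidently cabled back-to-back, given that the ping above crosses in ~0.7 ms, so NVMe/TCP traffic really traverses the NIC between two independent IP stacks on one host. Condensed from the trace, with this run's interface names (the reverse ping completes just below):

    # Condensed from the trace above; interface names as in this run.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
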
00:22:29.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:22:29.724 00:22:29.724 --- 10.0.0.1 ping statistics --- 00:22:29.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.724 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3129447 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3129447 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3129447 ']' 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.724 17:59:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:29.724 [2024-12-06 17:59:17.225006] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
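
fio.sh then boots the target inside that namespace. Every flag is visible in the trace: -i 0 sets the shared-memory ID, -e 0xFFFF enables all tracepoint groups (hence the spdk_trace hints among the notices below), and -m 0xF hands the app cores 0-3, which matches the four "Reactor started" lines that follow:

    # Target launch as traced above, inside the target's network namespace.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF    # shm id 0, all tracepoint groups, core mask 0xF
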
00:22:29.724 [2024-12-06 17:59:17.225059] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.724 [2024-12-06 17:59:17.311914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:29.724 [2024-12-06 17:59:17.361545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.724 [2024-12-06 17:59:17.361602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.724 [2024-12-06 17:59:17.361611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.724 [2024-12-06 17:59:17.361619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.724 [2024-12-06 17:59:17.361625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:29.724 [2024-12-06 17:59:17.363751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.724 [2024-12-06 17:59:17.363921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.724 [2024-12-06 17:59:17.364083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.724 [2024-12-06 17:59:17.364084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:30.293 17:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.293 17:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:30.293 17:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:30.552 [2024-12-06 17:59:18.146789] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.552 17:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:30.552 17:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:30.552 17:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.552 17:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:30.552 Malloc1 00:22:30.552 17:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:30.812 17:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:31.072 17:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.072 [2024-12-06 17:59:18.835885] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.072 17:59:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:31.333 17:59:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:31.593 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:31.593 fio-3.35 00:22:31.593 Starting 1 thread 00:22:34.144 00:22:34.144 test: (groupid=0, jobs=1): 
err= 0: pid=3130292: Fri Dec 6 17:59:21 2024
00:22:34.144 read: IOPS=13.9k, BW=54.5MiB/s (57.1MB/s)(109MiB/2005msec)
00:22:34.144 slat (nsec): min=1412, max=100360, avg=1916.75, stdev=879.18
00:22:34.144 clat (usec): min=1786, max=8678, avg=5055.55, stdev=341.67
00:22:34.144 lat (usec): min=1800, max=8680, avg=5057.47, stdev=341.61
00:22:34.144 clat percentiles (usec):
00:22:34.144 | 1.00th=[ 4293], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4817],
00:22:34.144 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 5080], 60.00th=[ 5145],
00:22:34.144 | 70.00th=[ 5211], 80.00th=[ 5342], 90.00th=[ 5473], 95.00th=[ 5604],
00:22:34.144 | 99.00th=[ 5800], 99.50th=[ 5866], 99.90th=[ 6652], 99.95th=[ 7767],
00:22:34.144 | 99.99th=[ 8094]
00:22:34.144 bw ( KiB/s): min=54656, max=56216, per=99.99%, avg=55772.00, stdev=747.35, samples=4
00:22:34.145 iops : min=13664, max=14054, avg=13943.00, stdev=186.84, samples=4
00:22:34.145 write: IOPS=14.0k, BW=54.5MiB/s (57.2MB/s)(109MiB/2005msec); 0 zone resets
00:22:34.145 slat (nsec): min=1445, max=93526, avg=1978.05, stdev=682.15
00:22:34.145 clat (usec): min=986, max=8062, avg=4072.36, stdev=299.97
00:22:34.145 lat (usec): min=993, max=8063, avg=4074.34, stdev=299.94
00:22:34.145 clat percentiles (usec):
00:22:34.145 | 1.00th=[ 3425], 5.00th=[ 3621], 10.00th=[ 3720], 20.00th=[ 3851],
00:22:34.145 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146],
00:22:34.145 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4490],
00:22:34.145 | 99.00th=[ 4686], 99.50th=[ 4817], 99.90th=[ 6521], 99.95th=[ 7635],
00:22:34.145 | 99.99th=[ 8029]
00:22:34.145 bw ( KiB/s): min=55144, max=56216, per=100.00%, avg=55856.00, stdev=499.90, samples=4
00:22:34.145 iops : min=13786, max=14054, avg=13964.00, stdev=124.97, samples=4
00:22:34.145 lat (usec) : 1000=0.01%
00:22:34.145 lat (msec) : 2=0.04%, 4=19.61%, 10=80.34%
00:22:34.145 cpu : usr=76.85%, sys=22.11%, ctx=45, majf=0, minf=16
00:22:34.145 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:22:34.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:34.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:22:34.145 issued rwts: total=27958,27983,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:34.145 latency : target=0, window=0, percentile=100.00%, depth=128
00:22:34.145
00:22:34.145 Run status group 0 (all jobs):
00:22:34.145 READ: bw=54.5MiB/s (57.1MB/s), 54.5MiB/s-54.5MiB/s (57.1MB/s-57.1MB/s), io=109MiB (115MB), run=2005-2005msec
00:22:34.145 WRITE: bw=54.5MiB/s (57.2MB/s), 54.5MiB/s-54.5MiB/s (57.2MB/s-57.2MB/s), io=109MiB (115MB), run=2005-2005msec
00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host --
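
The fio_nvme wrapper driving these jobs is nothing more than LD_PRELOAD plus fio's external-engine mechanism: the SPDK plugin is preloaded, the job's --filename carries an NVMe-oF transport ID instead of a device path, and ioengine=spdk with iodepth=128 comes from the job file (visible in the fio header above). A hand-run equivalent of the first job, with paths exactly as in the trace:

    # Hand-run equivalent of the fio_nvme call traced above.
    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme

    LD_PRELOAD=$plugin /usr/src/fio/fio \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096

The mock_sgl_config.fio run whose trace continues below uses the same mechanism with a different job file.
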
common/autotest_common.sh@1343 -- # local sanitizers 00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:34.145 17:59:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:34.405 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:34.405 fio-3.35 00:22:34.405 Starting 1 thread 00:22:36.935 00:22:36.935 test: (groupid=0, jobs=1): err= 0: pid=3131031: Fri Dec 6 17:59:24 2024 00:22:36.935 read: IOPS=12.4k, BW=193MiB/s (202MB/s)(387MiB/2004msec) 00:22:36.935 slat (nsec): min=2332, max=76267, avg=2455.32, stdev=999.91 00:22:36.935 clat (usec): min=2068, max=12628, avg=6254.99, stdev=1636.28 00:22:36.935 lat (usec): min=2070, max=12630, avg=6257.44, stdev=1636.33 00:22:36.935 clat percentiles (usec): 00:22:36.935 | 1.00th=[ 3261], 5.00th=[ 3851], 10.00th=[ 4228], 20.00th=[ 4817], 00:22:36.935 | 30.00th=[ 5211], 40.00th=[ 5669], 50.00th=[ 6128], 60.00th=[ 6587], 00:22:36.935 | 70.00th=[ 7046], 80.00th=[ 7701], 90.00th=[ 8356], 95.00th=[ 8979], 00:22:36.935 | 99.00th=[10814], 99.50th=[11469], 99.90th=[12256], 99.95th=[12518], 00:22:36.935 | 99.99th=[12649] 00:22:36.935 bw ( KiB/s): min=95872, max=100384, per=49.32%, avg=97472.00, stdev=2022.68, samples=4 00:22:36.935 iops : min= 5992, max= 6274, avg=6092.00, stdev=126.42, samples=4 00:22:36.935 write: IOPS=7375, BW=115MiB/s 
(121MB/s)(199MiB/1725msec); 0 zone resets 00:22:36.935 slat (usec): min=27, max=107, avg=27.68, stdev= 1.80 00:22:36.935 clat (usec): min=2373, max=11785, avg=7152.91, stdev=1235.02 00:22:36.935 lat (usec): min=2401, max=11812, avg=7180.59, stdev=1234.96 00:22:36.935 clat percentiles (usec): 00:22:36.935 | 1.00th=[ 4752], 5.00th=[ 5473], 10.00th=[ 5735], 20.00th=[ 6128], 00:22:36.936 | 30.00th=[ 6456], 40.00th=[ 6718], 50.00th=[ 6980], 60.00th=[ 7308], 00:22:36.936 | 70.00th=[ 7635], 80.00th=[ 8160], 90.00th=[ 8848], 95.00th=[ 9372], 00:22:36.936 | 99.00th=[10552], 99.50th=[10945], 99.90th=[11207], 99.95th=[11338], 00:22:36.936 | 99.99th=[11731] 00:22:36.936 bw ( KiB/s): min=99712, max=104320, per=85.98%, avg=101464.00, stdev=2064.17, samples=4 00:22:36.936 iops : min= 6232, max= 6520, avg=6341.50, stdev=129.01, samples=4 00:22:36.936 lat (msec) : 4=4.55%, 10=93.29%, 20=2.16% 00:22:36.936 cpu : usr=82.18%, sys=15.88%, ctx=30, majf=0, minf=32 00:22:36.936 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:36.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:36.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:36.936 issued rwts: total=24755,12723,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:36.936 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:36.936 00:22:36.936 Run status group 0 (all jobs): 00:22:36.936 READ: bw=193MiB/s (202MB/s), 193MiB/s-193MiB/s (202MB/s-202MB/s), io=387MiB (406MB), run=2004-2004msec 00:22:36.936 WRITE: bw=115MiB/s (121MB/s), 115MiB/s-115MiB/s (121MB/s-121MB/s), io=199MiB (208MB), run=1725-1725msec 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:36.936 rmmod nvme_tcp 00:22:36.936 rmmod nvme_fabrics 00:22:36.936 rmmod nvme_keyring 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3129447 ']' 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3129447 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3129447 ']' 00:22:36.936 17:59:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3129447 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3129447 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3129447' 00:22:36.936 killing process with pid 3129447 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3129447 00:22:36.936 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3129447 00:22:37.195 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:37.195 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:37.195 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:37.195 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:37.195 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:37.195 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:37.195 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:37.195 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:37.195 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:37.195 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.195 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.195 17:59:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.121 17:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:39.121 00:22:39.121 real 0m15.001s 00:22:39.121 user 0m58.007s 00:22:39.121 sys 0m5.796s 00:22:39.121 17:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:39.121 17:59:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.121 ************************************ 00:22:39.121 END TEST nvmf_fio_host 00:22:39.121 ************************************ 00:22:39.121 17:59:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:39.121 17:59:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:39.122 17:59:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:39.122 17:59:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.122 ************************************ 00:22:39.122 START TEST nvmf_failover 00:22:39.122 ************************************ 00:22:39.122 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:39.122 * Looking for test storage... 00:22:39.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:39.122 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:39.122 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:22:39.122 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:39.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.381 --rc genhtml_branch_coverage=1 00:22:39.381 --rc genhtml_function_coverage=1 00:22:39.381 --rc genhtml_legend=1 00:22:39.381 --rc geninfo_all_blocks=1 00:22:39.381 --rc geninfo_unexecuted_blocks=1 00:22:39.381 00:22:39.381 ' 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:39.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.381 --rc genhtml_branch_coverage=1 00:22:39.381 --rc genhtml_function_coverage=1 00:22:39.381 --rc genhtml_legend=1 00:22:39.381 --rc geninfo_all_blocks=1 00:22:39.381 --rc geninfo_unexecuted_blocks=1 00:22:39.381 00:22:39.381 ' 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:39.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.381 --rc genhtml_branch_coverage=1 00:22:39.381 --rc genhtml_function_coverage=1 00:22:39.381 --rc genhtml_legend=1 00:22:39.381 --rc geninfo_all_blocks=1 00:22:39.381 --rc geninfo_unexecuted_blocks=1 00:22:39.381 00:22:39.381 ' 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:39.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.381 --rc genhtml_branch_coverage=1 00:22:39.381 --rc genhtml_function_coverage=1 00:22:39.381 --rc genhtml_legend=1 00:22:39.381 --rc geninfo_all_blocks=1 00:22:39.381 --rc geninfo_unexecuted_blocks=1 00:22:39.381 00:22:39.381 ' 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.381 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:39.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
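(Note, condensed from the trace that follows: failover.sh drives everything through the JSON-RPC helper assigned to rpc_py above, first against the target's default /var/tmp/spdk.sock and then against the bdevperf socket set just below. A minimal sketch of the target-side bring-up the script performs, using only calls that appear expanded later in this log:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB I/O unit size
    $rpc_py bdev_malloc_create 64 512 -b Malloc0             # 64 MiB backing bdev, 512 B blocks
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The same add_listener call is repeated for ports 4421 and 4422 so the host has three paths to fail over between.)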
00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.382 17:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.382 17:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:39.382 17:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:39.382 17:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:39.382 17:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:44.708 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:44.708 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:44.708 Found net devices under 0000:31:00.0: cvl_0_0 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:44.708 Found net devices under 0000:31:00.1: cvl_0_1 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:44.708 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.709 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.709 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:44.709 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:44.709 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.709 17:59:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:44.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:22:44.709 00:22:44.709 --- 10.0.0.2 ping statistics --- 00:22:44.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.709 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:44.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:22:44.709 00:22:44.709 --- 10.0.0.1 ping statistics --- 00:22:44.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.709 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3135792 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3135792 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3135792 ']' 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:44.709 [2024-12-06 17:59:32.201990] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:22:44.709 [2024-12-06 17:59:32.202039] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.709 [2024-12-06 17:59:32.272682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:44.709 [2024-12-06 17:59:32.301806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:44.709 [2024-12-06 17:59:32.301835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.709 [2024-12-06 17:59:32.301842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.709 [2024-12-06 17:59:32.301847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.709 [2024-12-06 17:59:32.301851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:44.709 [2024-12-06 17:59:32.303150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.709 [2024-12-06 17:59:32.303304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.709 [2024-12-06 17:59:32.303306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.709 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:45.002 [2024-12-06 17:59:32.542759] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.002 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:45.002 Malloc0 00:22:45.002 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:45.314 17:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:45.314 17:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:45.605 [2024-12-06 17:59:33.195896] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.605 17:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:45.605 [2024-12-06 17:59:33.352338] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:45.605 17:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:45.865 [2024-12-06 17:59:33.512765] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:22:45.865 17:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:45.865 17:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3136159 00:22:45.865 17:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:45.865 17:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3136159 /var/tmp/bdevperf.sock 00:22:45.865 17:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3136159 ']' 00:22:45.865 17:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.865 17:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:45.865 17:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.865 17:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:45.865 17:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:46.803 17:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:46.803 17:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:46.803 17:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:46.803 NVMe0n1 00:22:46.803 17:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:47.375 00:22:47.375 17:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3136491 00:22:47.375 17:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:47.375 17:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:48.312 17:59:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:48.312 [2024-12-06 17:59:36.101699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f510 is same with the state(6) to be set 00:22:48.312 [2024-12-06 17:59:36.101739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f510 is same with the state(6) to be set 00:22:48.312 [2024-12-06 17:59:36.101746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f510 is same with the state(6) to be set 00:22:48.312 [2024-12-06 
17:59:36.101750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6f510 is same with the state(6) to be set
[tcp.c:1790 recv-state message for tqpair=0xd6f510 repeated many times after the listener removal; identical lines trimmed]
00:22:48.312 17:59:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:51.598 17:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:51.857 00:22:51.857 17:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:51.857 [2024-12-06 17:59:39.635029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd6ffc0 is same with the state(6) to be set
[tcp.c:1790 recv-state message for tqpair=0xd6ffc0 repeated many times after the listener removal; identical lines trimmed]
00:22:51.858 17:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:22:55.145 17:59:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:55.146 [2024-12-06 17:59:42.797997] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:55.146 17:59:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:22:56.105 17:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:56.365 [2024-12-06 17:59:43.965114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc35620 is same with the state(6) to be set
[the identical *ERROR* line repeats for tqpair=0xc35620 from 17:59:43.965147 through 17:59:43.965388 -- roughly fifty entries condensed here]
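For reference, the two RPCs driving the step just shown: the script publishes a listener on port 4420, waits a second, then retires the 4422 listener so the connected initiator is forced onto another path. A minimal standalone sketch -- the RPC names and flags are exactly as the log records them; the shell variables are only shorthand:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # Publish the 4420 listener on the subsystem...
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  # ...then drop the 4422 listener, severing the path the host is using.
  $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422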
00:22:56.366 17:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3136491
00:23:02.944 {
00:23:02.944   "results": [
00:23:02.944     {
00:23:02.944       "job": "NVMe0n1",
00:23:02.944       "core_mask": "0x1",
00:23:02.944       "workload": "verify",
00:23:02.944       "status": "finished",
00:23:02.944       "verify_range": {
00:23:02.944         "start": 0,
00:23:02.944         "length": 16384
00:23:02.944       },
00:23:02.944       "queue_depth": 128,
00:23:02.944       "io_size": 4096,
00:23:02.944       "runtime": 15.045314,
00:23:02.944       "iops": 12695.049102996454,
00:23:02.944       "mibps": 49.5900355585799,
00:23:02.944       "io_failed": 9373,
00:23:02.944       "io_timeout": 0,
00:23:02.944       "avg_latency_us": 9565.37608112829,
00:23:02.944       "min_latency_us": 532.48,
00:23:02.944       "max_latency_us": 44127.573333333334
00:23:02.944     }
00:23:02.944   ],
00:23:02.944   "core_count": 1
00:23:02.944 }
00:23:02.944 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3136159
00:23:02.944 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3136159 ']'
00:23:02.944 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3136159
00:23:02.944 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:23:02.944 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:02.944 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3136159
00:23:02.944 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:02.944 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:02.944 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3136159'
00:23:02.944 killing process with pid 3136159
00:23:02.944 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3136159
00:23:02.944 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3136159
00:23:02.944 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:02.944 [2024-12-06 17:59:33.564619] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization...
00:23:02.944 [2024-12-06 17:59:33.564677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3136159 ]
00:23:02.944 [2024-12-06 17:59:33.642879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:02.944 [2024-12-06 17:59:33.678727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:02.944 Running I/O for 15 seconds...
00:23:02.944 11589.00 IOPS, 45.27 MiB/s [2024-12-06T16:59:50.771Z]
00:23:02.944 [2024-12-06 17:59:36.102818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:02.944 [2024-12-06 17:59:36.102851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[an identical print_command/print_completion pair follows for every in-flight WRITE from lba:99552 through lba:100528 (len:8, ascending in steps of 8, varying cid, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed ABORTED - SQ DELETION (00/08) -- about 120 pairs spanning 17:59:36.102869 through 17:59:36.104928 condensed here]
00:23:02.947 [2024-12-06 17:59:36.104949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:02.947 [2024-12-06 17:59:36.104959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100536 len:8 PRP1 0x0 PRP2 0x0
00:23:02.947 [2024-12-06 17:59:36.104967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:02.947 [2024-12-06 17:59:36.104978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:02.947 [2024-12-06 17:59:36.104984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:02.947 [2024-12-06 17:59:36.104990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100544 len:8 PRP1 0x0 PRP2 0x0
00:23:02.948 [2024-12-06 17:59:36.104997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:02.948 [2024-12-06 17:59:36.105005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:02.948 [2024-12-06 17:59:36.105011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:02.948 [2024-12-06 17:59:36.105019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99536 len:8 PRP1 0x0 PRP2 0x0
00:23:02.948 [2024-12-06 17:59:36.105026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:02.948 [2024-12-06 17:59:36.105034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:02.948 [2024-12-06 17:59:36.105039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:02.948 [2024-12-06 17:59:36.105045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99544 len:8 PRP1 0x0 PRP2 0x0
00:23:02.948 [2024-12-06 17:59:36.105053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:02.948 [2024-12-06 17:59:36.105091] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:02.948 [2024-12-06 17:59:36.105116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[the admin queue's four outstanding ASYNC EVENT REQUESTs (cid:0 through cid:3) are each printed and completed the same way, ABORTED - SQ DELETION (00/08), from 17:59:36.105125 through 17:59:36.105171]
00:23:02.948 [2024-12-06 17:59:36.105179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:02.948 [2024-12-06 17:59:36.105207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2087930 (9): Bad file descriptor
00:23:02.948 [2024-12-06 17:59:36.108723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:02.948 [2024-12-06 17:59:36.262838] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:23:02.948 11159.00 IOPS, 43.59 MiB/s [2024-12-06T16:59:50.775Z] 11790.33 IOPS, 46.06 MiB/s [2024-12-06T16:59:50.775Z] 12167.25 IOPS, 47.53 MiB/s [2024-12-06T16:59:50.775Z]
00:23:02.948 [2024-12-06 17:59:39.636352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... repeated nvme_io_qpair_print_command/spdk_nvme_print_completion pairs elided: queued READ commands lba 105656-106096 and WRITE commands lba 106112-106672 all completed as ABORTED - SQ DELETION (00/08) ...]
00:23:02.952 [2024-12-06 17:59:39.637884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:02.952 [2024-12-06 17:59:39.637889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:02.952 [2024-12-06 17:59:39.637893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106104 len:8 PRP1 0x0 PRP2 0x0
00:23:02.952 [2024-12-06 17:59:39.637899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:02.952 [2024-12-06 17:59:39.637931] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[... four ASYNC EVENT REQUEST admin commands (qid:0 cid:0-3) aborted with SQ DELETION elided ...]
00:23:02.952 [2024-12-06 17:59:39.637992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:23:02.952 [2024-12-06 17:59:39.640447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:23:02.952 [2024-12-06 17:59:39.640467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2087930 (9): Bad file descriptor
00:23:02.952 [2024-12-06 17:59:39.668095] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:23:02.952 12231.80 IOPS, 47.78 MiB/s [2024-12-06T16:59:50.779Z] 12351.83 IOPS, 48.25 MiB/s [2024-12-06T16:59:50.779Z] 12446.14 IOPS, 48.62 MiB/s [2024-12-06T16:59:50.779Z] 12505.88 IOPS, 48.85 MiB/s [2024-12-06T16:59:50.779Z]
00:23:02.952 [2024-12-06 17:59:43.965633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... repeated nvme_io_qpair_print_command/spdk_nvme_print_completion pairs elided: queued READ commands lba 40032-40384 all completed as ABORTED - SQ DELETION (00/08) ...]
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.954 [2024-12-06 17:59:43.966218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.954 [2024-12-06 17:59:43.966229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:40408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.954 [2024-12-06 17:59:43.966241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.954 [2024-12-06 17:59:43.966253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.954 [2024-12-06 17:59:43.966265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.954 [2024-12-06 17:59:43.966276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.954 [2024-12-06 17:59:43.966288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:40448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.954 [2024-12-06 17:59:43.966300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:40456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:40464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:40480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:40504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:40544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 
[2024-12-06 17:59:43.966446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:40552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:40584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:40592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:40600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.954 [2024-12-06 17:59:43.966540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:40616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.954 [2024-12-06 17:59:43.966545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966563] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:40640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:40648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:40656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:40680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:40688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:40696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:40704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:37 nsid:1 lba:40712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:40720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:40736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:40744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:40752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:40760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:40768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:40792 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:40808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:40816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:40824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:40832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:40840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:40848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:40856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:40872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 
[2024-12-06 17:59:43.966917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:40880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.955 [2024-12-06 17:59:43.966928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.955 [2024-12-06 17:59:43.966935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:40888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.956 [2024-12-06 17:59:43.966940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.966946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.956 [2024-12-06 17:59:43.966951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.966958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.956 [2024-12-06 17:59:43.966963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.966969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:40912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.956 [2024-12-06 17:59:43.966974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.966981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:40920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.956 [2024-12-06 17:59:43.966986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.966992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:40928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.956 [2024-12-06 17:59:43.966997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.967004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.956 [2024-12-06 17:59:43.967009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.967015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:40944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.956 [2024-12-06 17:59:43.967020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.967026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.956 [2024-12-06 17:59:43.967031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.967039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.956 [2024-12-06 17:59:43.967043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.967050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.956 [2024-12-06 17:59:43.967055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.967062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:40976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.956 [2024-12-06 17:59:43.967067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.967073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.956 [2024-12-06 17:59:43.967078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.967085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:40992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.956 [2024-12-06 17:59:43.967090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.967096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.956 [2024-12-06 17:59:43.967104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.967110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.956 [2024-12-06 17:59:43.967115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.967121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.956 [2024-12-06 17:59:43.967127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.967133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.956 [2024-12-06 17:59:43.967138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.967145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:02.956 [2024-12-06 17:59:43.967150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.967168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:02.956 [2024-12-06 17:59:43.967173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:02.956 [2024-12-06 17:59:43.967177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41040 len:8 PRP1 0x0 PRP2 0x0 00:23:02.956 [2024-12-06 17:59:43.967183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.967218] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:02.956 [2024-12-06 17:59:43.967235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.956 [2024-12-06 17:59:43.967243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.967249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.956 [2024-12-06 17:59:43.967255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.967261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.956 [2024-12-06 17:59:43.967267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.967273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.956 [2024-12-06 17:59:43.967279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.956 [2024-12-06 17:59:43.967285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:02.956 [2024-12-06 17:59:43.969737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:02.956 [2024-12-06 17:59:43.969757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2087930 (9): Bad file descriptor 00:23:02.956 [2024-12-06 17:59:43.997277] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
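The abort storm above is the expected signature of a forced failover in this test: when the TCP connection on the active path drops (Bad file descriptor on the tqpair), bdev_nvme tears down the I/O qpair, every command still queued on it is manually completed with ABORTED - SQ DELETION (00/08), and the controller is reset onto the next registered path (here 10.0.0.2:4422 to 10.0.0.2:4420). A minimal sketch for tallying these notices offline, assuming each notice sits on its own line as SPDK emits it and that the log was saved as try.txt (both are assumptions about the capture, not part of the test):

awk '/nvme_io_qpair_print_command/ {
    for (i = 1; i <= NF; i++) {
        if ($i == "*NOTICE*:") op = $(i + 1)                 # READ or WRITE opcode
        if ($i ~ /^lba:/) { split($i, a, ":"); lba = a[2] + 0 }
    }
    n[op]++
    if (!(op in lo) || lba < lo[op]) lo[op] = lba
    if (lba > hi[op]) hi[op] = lba
}
END { for (op in n) printf "%s: %d aborted, lba %d-%d\n", op, n[op], lo[op], hi[op] }' try.txt

Run against this window of the log it would report roughly 48 READs (lba 40072-40448) and 74 WRITEs (lba 40456-41040), all with the same SQ-deletion status.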
00:23:02.956 12518.78 IOPS, 48.90 MiB/s [2024-12-06T16:59:50.783Z] 12588.70 IOPS, 49.17 MiB/s [2024-12-06T16:59:50.783Z] 12630.45 IOPS, 49.34 MiB/s [2024-12-06T16:59:50.783Z] 12657.42 IOPS, 49.44 MiB/s [2024-12-06T16:59:50.783Z] 12694.85 IOPS, 49.59 MiB/s [2024-12-06T16:59:50.783Z] 12714.79 IOPS, 49.67 MiB/s [2024-12-06T16:59:50.783Z] 12733.13 IOPS, 49.74 MiB/s 00:23:02.956 Latency(us) 00:23:02.956 [2024-12-06T16:59:50.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.956 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:02.956 Verification LBA range: start 0x0 length 0x4000 00:23:02.956 NVMe0n1 : 15.05 12695.05 49.59 622.98 0.00 9565.38 532.48 44127.57 00:23:02.956 [2024-12-06T16:59:50.783Z] =================================================================================================================== 00:23:02.956 [2024-12-06T16:59:50.783Z] Total : 12695.05 49.59 622.98 0.00 9565.38 532.48 44127.57 00:23:02.956 Received shutdown signal, test time was about 15.000000 seconds 00:23:02.956 00:23:02.956 Latency(us) 00:23:02.956 [2024-12-06T16:59:50.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.956 [2024-12-06T16:59:50.783Z] =================================================================================================================== 00:23:02.956 [2024-12-06T16:59:50.783Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:02.956 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:02.956 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:02.956 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:02.956 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3139824 00:23:02.956 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:02.956 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3139824 /var/tmp/bdevperf.sock 00:23:02.956 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3139824 ']' 00:23:02.956 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.956 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.956 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
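Two details of the trace above are worth unpacking. First, the pass criterion: the script greps try.txt for 'Resetting controller successful' and requires exactly three hits, one per forced failover. Second, bdevperf is started with -z, which (as the later perform_tests call shows) makes it idle until a test is triggered over the RPC socket, so the script must block in waitforlisten until that socket answers. A hedged sketch of that polling pattern; the real helper lives in the SPDK common test scripts, and the retry count, interval, and socket-existence check here are illustrative assumptions:

waitforlisten_sketch() {
    local pid=$1 rpc_sock=${2:-/var/tmp/bdevperf.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died early
        [[ -S $rpc_sock ]] && return 0           # UNIX-domain RPC socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}
waitforlisten_sketch "$bdevperf_pid" /var/tmp/bdevperf.sock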
00:23:02.956 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.956 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:02.956 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.956 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:02.956 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:02.956 [2024-12-06 17:59:50.620941] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:02.957 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:03.217 [2024-12-06 17:59:50.781312] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:03.217 17:59:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:03.476 NVMe0n1 00:23:03.476 17:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:03.736 00:23:03.736 17:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:04.303 00:23:04.303 17:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:04.303 17:59:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:04.303 17:59:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:04.561 17:59:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:07.851 17:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:07.851 17:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:07.851 17:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3140837 00:23:07.851 17:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:07.851 17:59:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3140837 00:23:08.790 { 00:23:08.790 "results": [ 00:23:08.790 { 00:23:08.790 "job": "NVMe0n1", 00:23:08.790 "core_mask": "0x1", 
00:23:08.790 "workload": "verify", 00:23:08.790 "status": "finished", 00:23:08.790 "verify_range": { 00:23:08.790 "start": 0, 00:23:08.790 "length": 16384 00:23:08.790 }, 00:23:08.790 "queue_depth": 128, 00:23:08.790 "io_size": 4096, 00:23:08.790 "runtime": 1.004546, 00:23:08.790 "iops": 13029.766680669676, 00:23:08.790 "mibps": 50.89752609636592, 00:23:08.790 "io_failed": 0, 00:23:08.790 "io_timeout": 0, 00:23:08.790 "avg_latency_us": 9786.918399674027, 00:23:08.790 "min_latency_us": 836.2666666666667, 00:23:08.790 "max_latency_us": 10922.666666666666 00:23:08.790 } 00:23:08.790 ], 00:23:08.790 "core_count": 1 00:23:08.790 } 00:23:08.790 17:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:08.790 [2024-12-06 17:59:50.316755] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:23:08.790 [2024-12-06 17:59:50.316814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3139824 ] 00:23:08.790 [2024-12-06 17:59:50.381479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.790 [2024-12-06 17:59:50.409613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.790 [2024-12-06 17:59:52.177601] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:08.790 [2024-12-06 17:59:52.177636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.790 [2024-12-06 17:59:52.177645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.790 [2024-12-06 17:59:52.177651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.790 [2024-12-06 17:59:52.177657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.790 [2024-12-06 17:59:52.177663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.790 [2024-12-06 17:59:52.177668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.790 [2024-12-06 17:59:52.177674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:08.790 [2024-12-06 17:59:52.177679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.790 [2024-12-06 17:59:52.177684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:23:08.790 [2024-12-06 17:59:52.177706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:08.790 [2024-12-06 17:59:52.177717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b9930 (9): Bad file descriptor 00:23:08.790 [2024-12-06 17:59:52.322176] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:08.790 Running I/O for 1 seconds... 00:23:08.790 12961.00 IOPS, 50.63 MiB/s 00:23:08.790 Latency(us) 00:23:08.790 [2024-12-06T16:59:56.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.790 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:08.790 Verification LBA range: start 0x0 length 0x4000 00:23:08.790 NVMe0n1 : 1.00 13029.77 50.90 0.00 0.00 9786.92 836.27 10922.67 00:23:08.790 [2024-12-06T16:59:56.617Z] =================================================================================================================== 00:23:08.790 [2024-12-06T16:59:56.617Z] Total : 13029.77 50.90 0.00 0.00 9786.92 836.27 10922.67 00:23:08.790 17:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:08.790 17:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:08.790 17:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:09.050 17:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:09.050 17:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:09.310 17:59:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:09.310 17:59:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:12.605 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:12.605 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:12.605 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3139824 00:23:12.605 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3139824 ']' 00:23:12.605 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3139824 00:23:12.605 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:12.605 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:12.605 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3139824 00:23:12.605 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:12.605 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:12.605 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3139824' 00:23:12.605 killing process with pid 3139824 00:23:12.605 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3139824 00:23:12.605 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3139824 00:23:12.605 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:12.605 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:12.869 rmmod nvme_tcp 00:23:12.869 rmmod nvme_fabrics 00:23:12.869 rmmod nvme_keyring 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3135792 ']' 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3135792 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3135792 ']' 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3135792 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3135792 00:23:12.869 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:12.870 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:12.870 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3135792' 00:23:12.870 killing process with pid 3135792 00:23:12.870 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3135792 00:23:12.870 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3135792 00:23:13.131 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:23:13.131 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:13.131 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:13.131 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:13.131 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:13.131 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:13.131 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:13.131 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:13.131 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:13.131 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.131 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.131 18:00:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.035 18:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:15.035 00:23:15.035 real 0m35.997s 00:23:15.035 user 1m56.044s 00:23:15.035 sys 0m6.559s 00:23:15.035 18:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:15.035 18:00:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:15.035 ************************************ 00:23:15.035 END TEST nvmf_failover 00:23:15.035 ************************************ 00:23:15.295 18:00:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:15.295 18:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:15.295 18:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:15.295 18:00:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.295 ************************************ 00:23:15.295 START TEST nvmf_host_discovery 00:23:15.295 ************************************ 00:23:15.295 18:00:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:15.295 * Looking for test storage... 
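The nvmftestfini teardown traced just above (module removal, iptables restore, namespace cleanup, address flush) amounts to the sketch below. Module names and the SPDK_NVMF iptables filter come from the trace; the loop form and the error suppression are assumptions added for a standalone rendering:

sync
for mod in nvme-tcp nvme-fabrics nvme-keyring; do
    modprobe -v -r "$mod" || true      # ignore modules still pinned by other users
done
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test firewall rules
ip -4 addr flush cvl_0_1                               # clear the test NIC address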
00:23:15.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:15.295 18:00:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:15.295 18:00:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:23:15.295 18:00:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.295 --rc genhtml_branch_coverage=1 00:23:15.295 --rc genhtml_function_coverage=1 00:23:15.295 --rc genhtml_legend=1 00:23:15.295 --rc geninfo_all_blocks=1 00:23:15.295 --rc geninfo_unexecuted_blocks=1 00:23:15.295 00:23:15.295 ' 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.295 --rc genhtml_branch_coverage=1 00:23:15.295 --rc genhtml_function_coverage=1 00:23:15.295 --rc genhtml_legend=1 00:23:15.295 --rc geninfo_all_blocks=1 00:23:15.295 --rc geninfo_unexecuted_blocks=1 00:23:15.295 00:23:15.295 ' 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.295 --rc genhtml_branch_coverage=1 00:23:15.295 --rc genhtml_function_coverage=1 00:23:15.295 --rc genhtml_legend=1 00:23:15.295 --rc geninfo_all_blocks=1 00:23:15.295 --rc geninfo_unexecuted_blocks=1 00:23:15.295 00:23:15.295 ' 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.295 --rc genhtml_branch_coverage=1 00:23:15.295 --rc genhtml_function_coverage=1 00:23:15.295 --rc genhtml_legend=1 00:23:15.295 --rc geninfo_all_blocks=1 00:23:15.295 --rc geninfo_unexecuted_blocks=1 00:23:15.295 00:23:15.295 ' 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:15.295 18:00:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.295 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:15.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:15.296 18:00:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.573 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.573 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:20.573 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:20.573 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:20.573 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:20.574 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:20.574 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.574 18:00:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:20.574 Found net devices under 0000:31:00.0: cvl_0_0 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:20.574 Found net devices under 0000:31:00.1: cvl_0_1 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:20.574 
18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.574 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:20.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:23:20.833 00:23:20.833 --- 10.0.0.2 ping statistics --- 00:23:20.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.833 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
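
The nvmf_tcp_init sequence traced here is ordinary iproute2 plumbing: one port of the e810 NIC (cvl_0_0) is moved into a private namespace to act as the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, and the two pings verify reachability in both directions over real hardware on one host. Collected from the trace, with comments added:

    ip -4 addr flush cvl_0_0                 # clear any stale addresses
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk             # fresh namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port, tagging the rule so teardown can find it later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root
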
00:23:20.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:23:20.833 00:23:20.833 --- 10.0.0.1 ping statistics --- 00:23:20.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.833 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.833 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3146733 00:23:20.834 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3146733 00:23:20.834 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3146733 ']' 00:23:20.834 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.834 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:20.834 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.834 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.834 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.834 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.834 [2024-12-06 18:00:08.642905] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
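
nvmfappstart then launches the target inside the namespace and blocks in waitforlisten until the RPC socket answers. A simplified sketch of that launch-and-poll step, assuming the repository root as working directory; the polling loop is a paraphrase (the real helper also bounds its retries), and rpc_get_methods is just a cheap RPC to probe with:

    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                      # 3146733 in this run
    # poll the UNIX-domain RPC socket until the app is up
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid"          # under set -e, abort if the target died
        sleep 0.1
    done
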
00:23:20.834 [2024-12-06 18:00:08.642955] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.093 [2024-12-06 18:00:08.716404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.093 [2024-12-06 18:00:08.746050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.093 [2024-12-06 18:00:08.746080] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.093 [2024-12-06 18:00:08.746087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.093 [2024-12-06 18:00:08.746092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.093 [2024-12-06 18:00:08.746096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.093 [2024-12-06 18:00:08.746584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.093 [2024-12-06 18:00:08.849555] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.093 [2024-12-06 18:00:08.857726] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.093 null0 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.093 null1 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3146846 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3146846 /tmp/host.sock 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3146846 ']' 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:21.093 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.093 18:00:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:21.352 [2024-12-06 18:00:08.921903] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:23:21.352 [2024-12-06 18:00:08.921950] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3146846 ] 00:23:21.352 [2024-12-06 18:00:09.001953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.352 [2024-12-06 18:00:09.038146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:21.921 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 
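
Two SPDK processes are now in play: the target (pid 3146733, core mask 0x2, inside the namespace, driven over /var/tmp/spdk.sock) and a second app acting as the host (pid 3146846, core mask 0x1, driven over /tmp/host.sock). Stripped of the rpc_cmd shell wrapper, the setup just performed is:

    # target: TCP transport, a discovery listener, and two null bdevs to export
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512   # 1000 MB, 512-byte blocks
    scripts/rpc.py bdev_null_create null1 1000 512

    # host: enable bdev_nvme debug logging and follow the discovery service
    scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

From here the test repeatedly compares the host's view (bdev_nvme_get_controllers, bdev_get_bdevs over /tmp/host.sock) against what the target has been told to expose.
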
00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 [2024-12-06 18:00:09.928389] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@55 -- # xargs 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.182 18:00:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:23:22.441 18:00:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:23.009 [2024-12-06 18:00:10.758071] bdev_nvme.c:7510:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:23.009 [2024-12-06 18:00:10.758092] bdev_nvme.c:7596:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:23.009 [2024-12-06 18:00:10.758109] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:23.268 [2024-12-06 18:00:10.886511] bdev_nvme.c:7439:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:23.268 [2024-12-06 18:00:10.946271] 
bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:23.268 [2024-12-06 18:00:10.947245] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xecc190:1 started. 00:23:23.268 [2024-12-06 18:00:10.948859] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:23.268 [2024-12-06 18:00:10.948877] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:23.268 [2024-12-06 18:00:10.956772] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xecc190 was disconnected and freed. delete nvme_qpair. 00:23:23.268 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:23.268 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:23.268 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:23.268 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:23.268 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.268 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.268 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:23.268 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:23.268 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:23.268 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:23.529 18:00:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
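
The notification check being issued here fetches notify events starting from the last id the test has seen and counts them with jq; each nvmf_subsystem_add_ns on the target should surface as exactly one new bdev notification on the host. From this command and the arithmetic visible a few lines down (notification_count=1, notify_id advancing from 0 to 1 and later to 2), the helper reconstructs to roughly:

    get_notification_count() {
        # fetch events starting at $notify_id and count them client-side
        notification_count=$(scripts/rpc.py -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
    # is_notification_count_eq N then polls this via waitforcondition
    # until notification_count == N
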
00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.529 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:23.530 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.790 [2024-12-06 18:00:11.459066] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xecc510:1 started. 00:23:23.790 [2024-12-06 18:00:11.467975] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xecc510 was disconnected and freed. delete nvme_qpair. 
00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.790 [2024-12-06 18:00:11.512686] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:23.790 [2024-12-06 18:00:11.513197] bdev_nvme.c:7492:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:23.790 [2024-12-06 18:00:11.513216] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:23.790 18:00:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:23.790 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:23.791 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:23.791 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:23.791 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:23.791 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.791 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:23.791 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.791 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:23.791 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:23.791 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.791 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:23.791 18:00:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:24.049 [2024-12-06 18:00:11.642901] bdev_nvme.c:7434:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:24.049 [2024-12-06 18:00:11.743828] bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:24.049 [2024-12-06 18:00:11.743860] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:24.049 [2024-12-06 18:00:11.743866] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:24.049 [2024-12-06 18:00:11.743870] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 
00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.987 [2024-12-06 18:00:12.684083] bdev_nvme.c:7492:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:24.987 [2024-12-06 18:00:12.684103] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:24.987 [2024-12-06 18:00:12.692959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.987 [2024-12-06 18:00:12.692975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.987 [2024-12-06 18:00:12.692982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.987 [2024-12-06 18:00:12.692988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.987 [2024-12-06 18:00:12.692993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.987 [2024-12-06 18:00:12.692999] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.987 [2024-12-06 18:00:12.693006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.987 [2024-12-06 18:00:12.693012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.987 [2024-12-06 18:00:12.693022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9c7d0 is same with the state(6) to be set 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.987 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.987 [2024-12-06 18:00:12.702974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9c7d0 (9): Bad file descriptor 00:23:24.987 [2024-12-06 18:00:12.713007] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:24.987 [2024-12-06 18:00:12.713015] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:24.987 [2024-12-06 18:00:12.713021] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:24.987 [2024-12-06 18:00:12.713025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:24.987 [2024-12-06 18:00:12.713039] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:24.987 [2024-12-06 18:00:12.713491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.987 [2024-12-06 18:00:12.713522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9c7d0 with addr=10.0.0.2, port=4420 00:23:24.987 [2024-12-06 18:00:12.713531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9c7d0 is same with the state(6) to be set 00:23:24.987 [2024-12-06 18:00:12.713545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9c7d0 (9): Bad file descriptor 00:23:24.987 [2024-12-06 18:00:12.713563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:24.987 [2024-12-06 18:00:12.713569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:24.987 [2024-12-06 18:00:12.713576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:24.987 [2024-12-06 18:00:12.713581] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:24.987 [2024-12-06 18:00:12.713585] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:23:24.988 [2024-12-06 18:00:12.713589] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:24.988 [2024-12-06 18:00:12.723069] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:24.988 [2024-12-06 18:00:12.723084] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:24.988 [2024-12-06 18:00:12.723087] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:24.988 [2024-12-06 18:00:12.723091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:24.988 [2024-12-06 18:00:12.723107] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:24.988 [2024-12-06 18:00:12.723520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.988 [2024-12-06 18:00:12.723550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9c7d0 with addr=10.0.0.2, port=4420 00:23:24.988 [2024-12-06 18:00:12.723559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9c7d0 is same with the state(6) to be set 00:23:24.988 [2024-12-06 18:00:12.723573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9c7d0 (9): Bad file descriptor 00:23:24.988 [2024-12-06 18:00:12.723593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:24.988 [2024-12-06 18:00:12.723598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:24.988 [2024-12-06 18:00:12.723604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:24.988 [2024-12-06 18:00:12.723609] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:24.988 [2024-12-06 18:00:12.723613] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:24.988 [2024-12-06 18:00:12.723616] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.988 [2024-12-06 18:00:12.733136] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:24.988 [2024-12-06 18:00:12.733150] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:24.988 [2024-12-06 18:00:12.733154] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:24.988 [2024-12-06 18:00:12.733158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:24.988 [2024-12-06 18:00:12.733170] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:24.988 [2024-12-06 18:00:12.733470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.988 [2024-12-06 18:00:12.733481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9c7d0 with addr=10.0.0.2, port=4420 00:23:24.988 [2024-12-06 18:00:12.733486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9c7d0 is same with the state(6) to be set 00:23:24.988 [2024-12-06 18:00:12.733499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9c7d0 (9): Bad file descriptor 00:23:24.988 [2024-12-06 18:00:12.733507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:24.988 [2024-12-06 18:00:12.733511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:24.988 [2024-12-06 18:00:12.733520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:24.988 [2024-12-06 18:00:12.733525] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:24.988 [2024-12-06 18:00:12.733529] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:24.988 [2024-12-06 18:00:12.733532] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:24.988 [2024-12-06 18:00:12.743199] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:24.988 [2024-12-06 18:00:12.743208] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:24.988 [2024-12-06 18:00:12.743211] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:23:24.988 [2024-12-06 18:00:12.743214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:24.988 [2024-12-06 18:00:12.743224] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:24.988 [2024-12-06 18:00:12.743546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.988 [2024-12-06 18:00:12.743554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9c7d0 with addr=10.0.0.2, port=4420 00:23:24.988 [2024-12-06 18:00:12.743560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9c7d0 is same with the state(6) to be set 00:23:24.988 [2024-12-06 18:00:12.743567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9c7d0 (9): Bad file descriptor 00:23:24.988 [2024-12-06 18:00:12.743575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:24.988 [2024-12-06 18:00:12.743579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:24.988 [2024-12-06 18:00:12.743584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:24.988 [2024-12-06 18:00:12.743588] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:24.988 [2024-12-06 18:00:12.743591] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:24.988 [2024-12-06 18:00:12.743594] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:24.988 [2024-12-06 18:00:12.753252] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:24.988 [2024-12-06 18:00:12.753263] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:24.988 [2024-12-06 18:00:12.753266] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:24.988 [2024-12-06 18:00:12.753269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:24.988 [2024-12-06 18:00:12.753280] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' [2024-12-06 18:00:12.753570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.988 [2024-12-06 18:00:12.753585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9c7d0 with addr=10.0.0.2, port=4420 00:23:24.988 [2024-12-06 18:00:12.753591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9c7d0 is same with the state(6) to be set 00:23:24.988 [2024-12-06 18:00:12.753600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9c7d0 (9): Bad file descriptor 00:23:24.988 [2024-12-06 18:00:12.753608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:24.988 [2024-12-06 18:00:12.753613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:24.988 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:24.988 [2024-12-06 18:00:12.753621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:24.988 [2024-12-06 18:00:12.753625] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:24.989 [2024-12-06 18:00:12.753629] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:24.989 [2024-12-06 18:00:12.753632] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:24.989 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:24.989 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:24.989 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:24.989 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:24.989 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.989 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:24.989 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.989 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:24.989 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs [2024-12-06 18:00:12.763309] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:24.989 [2024-12-06 18:00:12.763318] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:23:24.989 [2024-12-06 18:00:12.763322] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:24.989 [2024-12-06 18:00:12.763325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:24.989 [2024-12-06 18:00:12.763335] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:24.989 [2024-12-06 18:00:12.763630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.989 [2024-12-06 18:00:12.763638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9c7d0 with addr=10.0.0.2, port=4420 00:23:24.989 [2024-12-06 18:00:12.763643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9c7d0 is same with the state(6) to be set 00:23:24.989 [2024-12-06 18:00:12.763651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9c7d0 (9): Bad file descriptor 00:23:24.989 [2024-12-06 18:00:12.763658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:24.989 [2024-12-06 18:00:12.763662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:24.989 [2024-12-06 18:00:12.763668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:24.989 [2024-12-06 18:00:12.763675] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:24.989 [2024-12-06 18:00:12.763678] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:24.989 [2024-12-06 18:00:12.763681] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:24.989 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.989 [2024-12-06 18:00:12.773318] bdev_nvme.c:7297:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:24.989 [2024-12-06 18:00:12.773331] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:24.989 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:23:24.989 18:00:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 
-- # (( max-- )) 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.374 18:00:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.312 [2024-12-06 18:00:15.020240] bdev_nvme.c:7510:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:27.312 [2024-12-06 18:00:15.020255] bdev_nvme.c:7596:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:27.312 [2024-12-06 18:00:15.020264] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:27.312 [2024-12-06 18:00:15.106493] bdev_nvme.c:7439:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:27.572 [2024-12-06 18:00:15.171147] bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:27.572 [2024-12-06 18:00:15.171775] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1001f00:1 started. 
00:23:27.572 [2024-12-06 18:00:15.173138] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:27.572 [2024-12-06 18:00:15.173159] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:27.572 [2024-12-06 18:00:15.177338] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1001f00 was disconnected and freed. delete nvme_qpair. 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.572 request: 00:23:27.572 { 00:23:27.572 "name": "nvme", 00:23:27.572 "trtype": "tcp", 00:23:27.572 "traddr": "10.0.0.2", 00:23:27.572 "adrfam": "ipv4", 00:23:27.572 "trsvcid": "8009", 00:23:27.572 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:27.572 "wait_for_attach": true, 00:23:27.572 "method": "bdev_nvme_start_discovery", 00:23:27.572 "req_id": 1 00:23:27.572 } 00:23:27.572 Got JSON-RPC error response 00:23:27.572 response: 00:23:27.572 { 00:23:27.572 "code": -17, 00:23:27.572 "message": "File exists" 00:23:27.572 } 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:27.572 
18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:27.572 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.573 request: 00:23:27.573 { 00:23:27.573 "name": "nvme_second", 00:23:27.573 "trtype": "tcp", 00:23:27.573 "traddr": "10.0.0.2", 00:23:27.573 "adrfam": "ipv4", 00:23:27.573 "trsvcid": "8009", 00:23:27.573 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:27.573 "wait_for_attach": true, 00:23:27.573 "method": "bdev_nvme_start_discovery", 
00:23:27.573 "req_id": 1 00:23:27.573 } 00:23:27.573 Got JSON-RPC error response 00:23:27.573 response: 00:23:27.573 { 00:23:27.573 "code": -17, 00:23:27.573 "message": "File exists" 00:23:27.573 } 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.573 18:00:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.949 [2024-12-06 18:00:16.340427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.949 [2024-12-06 18:00:16.340451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1002cb0 with addr=10.0.0.2, port=8010 00:23:28.949 [2024-12-06 18:00:16.340462] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:28.949 [2024-12-06 18:00:16.340467] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:28.949 [2024-12-06 18:00:16.340472] bdev_nvme.c:7578:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:29.516 [2024-12-06 18:00:17.342756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.517 [2024-12-06 18:00:17.342774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1002cb0 with addr=10.0.0.2, port=8010 00:23:29.517 [2024-12-06 18:00:17.342783] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:29.517 [2024-12-06 18:00:17.342787] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:29.517 [2024-12-06 18:00:17.342792] bdev_nvme.c:7578:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:30.897 [2024-12-06 18:00:18.344797] bdev_nvme.c:7553:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:30.897 request: 00:23:30.897 { 00:23:30.897 "name": "nvme_second", 00:23:30.897 "trtype": "tcp", 00:23:30.897 "traddr": "10.0.0.2", 00:23:30.897 "adrfam": "ipv4", 00:23:30.897 "trsvcid": "8010", 00:23:30.897 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:30.897 "wait_for_attach": false, 00:23:30.897 "attach_timeout_ms": 3000, 00:23:30.897 "method": "bdev_nvme_start_discovery", 00:23:30.897 "req_id": 1 00:23:30.897 } 00:23:30.897 Got JSON-RPC error response 00:23:30.897 response: 00:23:30.897 { 00:23:30.897 "code": -110, 00:23:30.897 "message": "Connection timed out" 00:23:30.897 } 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3146846 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:30.897 rmmod nvme_tcp 00:23:30.897 rmmod nvme_fabrics 00:23:30.897 rmmod nvme_keyring 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3146733 ']' 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3146733 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3146733 ']' 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3146733 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3146733 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3146733' 00:23:30.897 killing process with pid 3146733 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3146733 00:23:30.897 18:00:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3146733 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.897 18:00:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:33.438 00:23:33.438 real 0m17.740s 00:23:33.438 user 0m21.756s 00:23:33.438 sys 0m5.445s 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.438 ************************************ 00:23:33.438 END TEST nvmf_host_discovery 00:23:33.438 ************************************ 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.438 ************************************ 00:23:33.438 START TEST nvmf_host_multipath_status 00:23:33.438 ************************************ 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:33.438 * Looking for test storage... 
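[Annotation] The duplicate-discovery assertions traced above all ride on the suite's NOT wrapper: calling bdev_nvme_start_discovery with a controller name already in use is expected to fail with -17 "File exists" (and the port-8010 attempt with -110 once its 3000 ms -T budget expires), and the wrapper turns that expected failure into a passing check. A minimal sketch of the pattern, assuming a simplified stand-in for the real common/autotest_common.sh helper (which also validates the command type before running it):

NOT() {                        # illustrative stand-in, not the traced helper
    local es=0
    "$@" || es=$?              # run the wrapped command, capture its exit status
    (( es != 0 ))              # pass only when the command failed as expected
}
# A duplicate discovery name on the same host socket must be rejected:
NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w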
00:23:33.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.438 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:33.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.438 --rc genhtml_branch_coverage=1 00:23:33.438 --rc genhtml_function_coverage=1 00:23:33.438 --rc genhtml_legend=1 00:23:33.439 --rc geninfo_all_blocks=1 00:23:33.439 --rc geninfo_unexecuted_blocks=1 00:23:33.439 00:23:33.439 ' 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:33.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.439 --rc genhtml_branch_coverage=1 00:23:33.439 --rc genhtml_function_coverage=1 00:23:33.439 --rc genhtml_legend=1 00:23:33.439 --rc geninfo_all_blocks=1 00:23:33.439 --rc geninfo_unexecuted_blocks=1 00:23:33.439 00:23:33.439 ' 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:33.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.439 --rc genhtml_branch_coverage=1 00:23:33.439 --rc genhtml_function_coverage=1 00:23:33.439 --rc genhtml_legend=1 00:23:33.439 --rc geninfo_all_blocks=1 00:23:33.439 --rc geninfo_unexecuted_blocks=1 00:23:33.439 00:23:33.439 ' 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:33.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.439 --rc genhtml_branch_coverage=1 00:23:33.439 --rc genhtml_function_coverage=1 00:23:33.439 --rc genhtml_legend=1 00:23:33.439 --rc geninfo_all_blocks=1 00:23:33.439 --rc geninfo_unexecuted_blocks=1 00:23:33.439 00:23:33.439 ' 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
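[Annotation] The cmp_versions trace just above is only the coverage-tooling gate: lcov --version reports 1.15, 1.15 sorts below 2, so the 1.x-style "--rc lcov_branch_coverage=1 ..." options get exported. A rough, self-contained sketch of the comparison (the traced scripts/common.sh helper additionally splits on '-' and ':' and validates each component):

lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"     # e.g. 1.15 -> (1 15)
    IFS=. read -ra v2 <<< "$2"     # e.g. 2    -> (2)
    local i
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower component wins
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                       # equal versions are not "less than"
}
lt 1.15 2 && echo "use lcov 1.x options"   # matches the branch taken above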
00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:33.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:33.439 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:33.440 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:33.440 18:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:38.715 18:00:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:38.715 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:38.716 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
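[Annotation] The scan above has matched both functions of the Intel E810 NIC (0x8086:0x159b) against the e810 list; the rest of the walk, continuing below, maps each matched PCI function to its kernel net device through sysfs. Reduced to its core, with the first function from this run filled in:

# Illustrative reduction of the sysfs lookup traced below:
pci=0000:31:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)       # expands to .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")                # keep only the device names
echo "Found net devices under $pci: ${pci_net_devs[*]}"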
00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:38.716 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:38.716 Found net devices under 0000:31:00.0: cvl_0_0 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: 
cvl_0_1' 00:23:38.716 Found net devices under 0000:31:00.1: cvl_0_1 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.716 18:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.716 18:00:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:38.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:23:38.716 00:23:38.716 --- 10.0.0.2 ping statistics --- 00:23:38.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.716 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:38.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:23:38.716 00:23:38.716 --- 10.0.0.1 ping statistics --- 00:23:38.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.716 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:38.716 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:38.717 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3153661 00:23:38.717 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3153661 00:23:38.717 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3153661 ']' 00:23:38.717 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:38.717 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.717 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:38.717 18:00:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.717 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.717 18:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:38.717 [2024-12-06 18:00:26.209596] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:23:38.717 [2024-12-06 18:00:26.209647] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.717 [2024-12-06 18:00:26.296638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:38.717 [2024-12-06 18:00:26.343254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.717 [2024-12-06 18:00:26.343301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.717 [2024-12-06 18:00:26.343310] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.717 [2024-12-06 18:00:26.343318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.717 [2024-12-06 18:00:26.343324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:38.717 [2024-12-06 18:00:26.344850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.717 [2024-12-06 18:00:26.344857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.336 18:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.336 18:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:39.336 18:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.336 18:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:39.336 18:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:39.336 18:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.336 18:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3153661 00:23:39.336 18:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:39.625 [2024-12-06 18:00:27.185969] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.625 18:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:39.625 Malloc0 00:23:39.625 18:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:23:39.934 18:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.934 18:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:40.194 [2024-12-06 18:00:27.848972] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.194 18:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:40.194 [2024-12-06 18:00:28.009372] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:40.455 18:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:40.455 18:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3154220 00:23:40.455 18:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:40.455 18:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3154220 /var/tmp/bdevperf.sock 00:23:40.455 18:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3154220 ']' 00:23:40.455 18:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.455 18:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.455 18:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
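[Annotation] At this point the target side is fully provisioned for the multipath run; stripped of xtrace noise, and with the repository path shortened to rpc.py for readability, the setup traced above amounts to:

# Same RPCs as traced, consolidated:
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The bdevperf process started below then attaches Nvme0 over both ports with -x multipath, giving the two I/O paths whose ANA states the rest of the test toggles.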
00:23:40.455 18:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.455 18:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:40.455 18:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.455 18:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:40.455 18:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:40.715 18:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:40.974 Nvme0n1 00:23:41.232 18:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:41.491 Nvme0n1 00:23:41.491 18:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:41.491 18:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:44.040 18:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:44.040 18:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:44.040 18:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:44.040 18:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:44.979 18:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:44.979 18:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:44.979 18:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.979 18:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:44.979 18:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.979 18:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:44.979 18:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.979 18:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:45.238 18:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:45.238 18:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:45.238 18:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.238 18:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:45.238 18:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.238 18:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:45.238 18:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:45.238 18:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.498 18:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.498 18:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:45.498 18:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:45.498 18:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.757 18:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.757 18:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:45.757 18:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.757 18:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:45.757 18:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.757 18:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:45.757 18:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
00:23:46.016 18:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:46.275 18:00:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:47.214 18:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:47.214 18:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:47.214 18:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:47.214 18:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.214 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:47.214 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:47.214 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.214 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:47.490 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.490 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:47.490 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.490 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:47.752 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.752 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:47.752 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.752 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:47.752 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.752 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:47.752 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:47.752 18:00:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.011 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.011 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:48.011 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:48.011 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.270 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.270 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:48.270 18:00:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:48.270 18:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:48.530 18:00:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:49.467 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:49.467 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:49.467 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.467 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:49.725 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.725 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:49.725 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.725 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:49.725 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:49.725 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:49.725 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:49.725 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.984 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.984 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:49.984 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:49.984 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.245 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.245 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:50.245 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.245 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:50.245 18:00:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.245 18:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:50.245 18:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.245 18:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:50.504 18:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.504 18:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:50.504 18:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:50.504 18:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:50.762 18:00:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:51.700 18:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:51.700 18:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:51.700 18:00:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:51.700 18:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.960 18:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.960 18:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:51.960 18:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.960 18:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:52.220 18:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:52.220 18:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:52.220 18:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:52.220 18:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.220 18:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.220 18:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:52.220 18:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:52.220 18:00:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.480 18:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.480 18:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:52.480 18:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:52.480 18:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.740 18:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.740 18:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:52.740 18:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.740 18:00:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:52.740 18:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:52.740 18:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:52.740 18:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:53.000 18:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:53.000 18:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:54.376 18:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:54.376 18:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:54.376 18:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.376 18:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:54.376 18:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:54.376 18:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:54.376 18:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.376 18:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:54.376 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:54.376 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:54.376 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.376 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:54.635 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.635 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:54.635 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.635 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:54.635 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.635 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:54.635 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.635 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:54.893 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:54.893 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:54.893 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:54.893 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.151 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:55.151 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:55.151 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:55.151 18:00:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:55.410 18:00:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:56.343 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:56.343 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:56.343 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.343 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:56.602 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:56.602 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:56.602 18:00:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.602 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:56.602 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.602 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:56.602 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.602 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:56.862 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.862 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:56.862 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.862 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:57.121 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.121 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:57.121 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.121 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:57.121 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:57.121 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:57.121 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.121 18:00:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:57.378 18:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.378 18:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:57.636 18:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:23:57.637 18:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:57.637 18:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:57.895 18:00:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:58.833 18:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:58.833 18:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:58.833 18:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.834 18:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:59.092 18:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.092 18:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:59.092 18:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.092 18:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:59.092 18:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.092 18:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:59.092 18:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.092 18:00:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:59.351 18:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.351 18:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:59.351 18:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.351 18:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:59.610 18:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.610 18:00:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:59.610 18:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.610 18:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:59.610 18:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.610 18:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:59.610 18:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:59.610 18:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.870 18:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.870 18:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:59.870 18:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:00.130 18:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:00.130 18:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:01.067 18:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:01.067 18:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:01.067 18:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.067 18:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:01.327 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:01.327 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:01.327 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:01.327 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.587 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.587 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:01.587 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:01.587 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.587 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.587 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:01.587 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.587 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:01.847 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.847 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:01.847 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.847 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:02.107 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.107 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:02.107 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.107 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:02.107 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.107 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:02.107 18:00:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:02.367 18:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:02.367 18:00:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
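The bdev_nvme_set_multipath_policy call above (@116) switched Nvme0n1 to active_active, so with both listeners set to non_optimized both paths should now report current==true, which the check_status round below confirms. check_status is six port_status probes in a row; a minimal sketch reconstructed from the @64 and @68-@73 xtrace entries, with rpc_py as in the earlier sketch (helper bodies are inferred, not the verbatim script):

    # $1: listener trsvcid, $2: io_path field (current/connected/accessible), $3: expected value
    port_status() {
        local status
        status=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
                 jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
        [[ "$status" == "$3" ]]
    }
    # args: expected current, connected, accessible values for ports 4420/4421, in @68-@73 order
    check_status() {
        port_status 4420 current $1 && port_status 4421 current $2 &&
        port_status 4420 connected $3 && port_status 4421 connected $4 &&
        port_status 4420 accessible $5 && port_status 4421 accessible $6
    }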
00:24:03.747 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:03.747 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:03.747 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.747 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:03.747 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.747 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:03.747 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.747 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:03.747 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.747 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:03.747 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.747 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:04.007 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.007 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:04.007 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.007 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:04.265 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.265 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:04.265 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:04.265 18:00:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.265 18:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.265 18:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:04.266 18:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.266 18:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:04.526 18:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.526 18:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:04.526 18:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:04.526 18:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:04.785 18:00:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:05.723 18:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:05.723 18:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:05.723 18:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:05.723 18:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.983 18:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.983 18:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:05.983 18:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:05.983 18:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.242 18:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.242 18:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:06.242 18:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:06.242 18:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.242 18:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:06.242 18:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:06.242 18:00:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.242 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:06.502 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.502 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:06.502 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.502 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:06.502 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.502 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:06.502 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.502 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:06.762 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:06.762 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3154220 00:24:06.762 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3154220 ']' 00:24:06.762 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3154220 00:24:06.762 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:06.762 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.762 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3154220 00:24:06.762 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:06.762 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:06.763 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3154220' 00:24:06.763 killing process with pid 3154220 00:24:06.763 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3154220 00:24:06.763 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3154220
00:24:06.763 {
00:24:06.763   "results": [
00:24:06.763     {
00:24:06.763       "job": "Nvme0n1",
00:24:06.763       "core_mask": "0x4",
00:24:06.763       "workload": "verify",
00:24:06.763       "status": "terminated",
00:24:06.763       "verify_range": {
00:24:06.763         "start": 0,
00:24:06.763         "length": 16384
00:24:06.763       },
00:24:06.763       "queue_depth": 128,
00:24:06.763       "io_size": 4096,
00:24:06.763       "runtime": 25.180391,
00:24:06.763       "iops": 12130.034041171164,
00:24:06.763       "mibps": 47.38294547332486,
00:24:06.763       "io_failed": 0,
00:24:06.763       "io_timeout": 0,
00:24:06.763       "avg_latency_us": 10532.851612880695,
00:24:06.763       "min_latency_us": 401.06666666666666,
00:24:06.763       "max_latency_us": 3019898.88
00:24:06.763     }
00:24:06.763   ],
00:24:06.763   "core_count": 1
00:24:06.763 }
00:24:07.028 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3154220 00:24:07.028 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:07.028 [2024-12-06 18:00:28.051326] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:24:07.028 [2024-12-06 18:00:28.051374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3154220 ] 00:24:07.028 [2024-12-06 18:00:28.121226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.028 [2024-12-06 18:00:28.157610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.028 Running I/O for 90 seconds... 00:24:07.028 11179.00 IOPS, 43.67 MiB/s [2024-12-06T17:00:54.855Z] 12114.50 IOPS, 47.32 MiB/s [2024-12-06T17:00:54.855Z] 12394.67 IOPS, 48.42 MiB/s [2024-12-06T17:00:54.855Z] 12578.50 IOPS, 49.13 MiB/s [2024-12-06T17:00:54.855Z] 12695.20 IOPS, 49.59 MiB/s [2024-12-06T17:00:54.855Z] 12732.83 IOPS, 49.74 MiB/s [2024-12-06T17:00:54.855Z] 12774.29 IOPS, 49.90 MiB/s [2024-12-06T17:00:54.855Z] 12831.50 IOPS, 50.12 MiB/s [2024-12-06T17:00:54.855Z] 12849.78 IOPS, 50.19 MiB/s [2024-12-06T17:00:54.855Z] 12881.40 IOPS, 50.32 MiB/s [2024-12-06T17:00:54.855Z] 12887.82 IOPS, 50.34 MiB/s [2024-12-06T17:00:54.855Z] [2024-12-06 18:00:40.628136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:116840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.028 [2024-12-06 18:00:40.628168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:07.028 [2024-12-06 18:00:40.628203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.028 [2024-12-06 18:00:40.628210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:07.028 [2024-12-06 18:00:40.628221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:116856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.028 [2024-12-06 18:00:40.628227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:07.028 [2024-12-06 18:00:40.628237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:116864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.028 [2024-12-06 18:00:40.628243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:07.028 [2024-12-06 18:00:40.628253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.028 [2024-12-06 18:00:40.628258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:07.028 [2024-12-06 18:00:40.628269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.028 [2024-12-06 18:00:40.628274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:07.028 [2024-12-06 18:00:40.628284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:116888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.028 [2024-12-06 18:00:40.628289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:07.028 [2024-12-06 18:00:40.628300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:116896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.028 [2024-12-06 18:00:40.628305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:07.028 [2024-12-06 18:00:40.628315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:116904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.028 [2024-12-06 18:00:40.628320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:07.028 [2024-12-06 18:00:40.628331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:116912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.028 [2024-12-06 18:00:40.628341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:07.028 [2024-12-06 18:00:40.628351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:116920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.028 [2024-12-06 18:00:40.628357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:07.028 [2024-12-06 18:00:40.628368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:116928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.028 [2024-12-06 18:00:40.628373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:07.028 [2024-12-06 18:00:40.628384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:116936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.028 [2024-12-06 18:00:40.628389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:07.028 [2024-12-06 18:00:40.628399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:116944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:07.028 [2024-12-06 18:00:40.628404 .. 18:00:40.630976] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs on qid:1: WRITE nsid:1 lba:116952..117304 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ nsid:1 lba:116288..116832 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), every completion returning ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 [dozens of near-identical entries elided]
00:24:07.031 12134.00 IOPS, 47.40 MiB/s [2024-12-06T17:00:54.858Z] 11200.62 IOPS, 43.75 MiB/s [2024-12-06T17:00:54.858Z] 10400.57 IOPS, 40.63 MiB/s [2024-12-06T17:00:54.858Z] 10317.13 IOPS, 40.30 MiB/s [2024-12-06T17:00:54.858Z] 10488.31 IOPS, 40.97 MiB/s [2024-12-06T17:00:54.858Z] 10852.53 IOPS, 42.39 MiB/s [2024-12-06T17:00:54.858Z]
00:24:07.031 11187.67 IOPS, 43.70 MiB/s [2024-12-06T17:00:54.858Z] 11364.68 IOPS, 44.39 MiB/s [2024-12-06T17:00:54.858Z] 11446.25 IOPS, 44.71 MiB/s [2024-12-06T17:00:54.858Z] 11545.95 IOPS, 45.10 MiB/s [2024-12-06T17:00:54.858Z] 11788.09 IOPS, 46.05 MiB/s [2024-12-06T17:00:54.858Z] 12023.52 IOPS, 46.97 MiB/s [2024-12-06T17:00:54.858Z]
00:24:07.031 [2024-12-06 18:00:52.481667 .. 18:00:52.484229] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: second burst of command/completion pairs on qid:1: WRITE nsid:1 lba:744..1656 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ nsid:1 lba:664..696 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), again all completing ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 [dozens of near-identical entries elided]
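The (03/02) pair printed with every completion above is the NVMe Status Code Type / Status Code: SCT 0x3 is path-related status and SC 0x02 is Asymmetric Access Inaccessible, i.e. the controller reports the namespace's ANA group as inaccessible on this path; dnr:0 marks the error as retryable. A minimal sketch of decoding those fields from the raw 16-bit status word of a completion entry (the helper below is illustrative, not part of SPDK):

```bash
#!/usr/bin/env bash
# Illustrative sketch (not part of SPDK): decode the completion Status Field
# that nvme_qpair.c prints as "ASYMMETRIC ACCESS INACCESSIBLE (03/02)".
# Input: the 16-bit word from completion Dword 3 bits 31:16, where bit 0 is
# the phase tag ("p:"), bits 8:1 the Status Code (SC), bits 11:9 the Status
# Code Type (SCT), and bit 15 Do Not Retry ("dnr:").
decode_nvme_status() {
    local status=$1
    local sc=$(( (status >> 1) & 0xFF ))
    local sct=$(( (status >> 9) & 0x7 ))
    local dnr=$(( (status >> 15) & 0x1 ))
    local name="sct=${sct} sc=${sc}"
    if (( sct == 0x3 )); then          # SCT 0x3: path-related status
        case $sc in
            0) name="INTERNAL PATH ERROR" ;;
            1) name="ASYMMETRIC ACCESS PERSISTENT LOSS" ;;
            2) name="ASYMMETRIC ACCESS INACCESSIBLE" ;;
            3) name="ASYMMETRIC ACCESS TRANSITION" ;;
        esac
    fi
    printf '%s (%02x/%02x) p:%d dnr:%d\n' "$name" "$sct" "$sc" $(( status & 1 )) "$dnr"
}

# SCT 0x3 / SC 0x02 is what every completion above carries; dnr:0 means the
# host may retry the I/O on another path once the ANA state recovers.
decode_nvme_status $(( (0x3 << 9) | (0x02 << 1) ))
# -> ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 dnr:0
```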
00:24:07.033 12091.46 IOPS, 47.23 MiB/s [2024-12-06T17:00:54.860Z]
00:24:07.033 12122.84 IOPS, 47.35 MiB/s [2024-12-06T17:00:54.860Z]
00:24:07.033 Received shutdown signal, test time was about 25.181001 seconds
00:24:07.033
00:24:07.033 Latency(us)
00:24:07.033 [2024-12-06T17:00:54.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:07.033 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:07.033 Verification LBA range: start 0x0 length 0x4000
00:24:07.033 Nvme0n1 : 25.18 12130.03 47.38 0.00 0.00 10532.85 401.07 3019898.88
00:24:07.033 [2024-12-06T17:00:54.860Z] ===================================================================================================================
00:24:07.033 [2024-12-06T17:00:54.860Z] Total : 12130.03 47.38 0.00 0.00 10532.85 401.07 3019898.88
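The MiB/s column in the summary is just IOPS times the 4096-byte I/O size, divided by 2^20; a quick sanity check of the Total row:

```bash
# 12130.03 IOPS of 4 KiB I/Os: 12130.03 * 4096 / 2^20 MiB per second.
awk 'BEGIN { printf "%.2f MiB/s\n", 12130.03 * 4096 / (1024 * 1024) }'
# -> 47.38 MiB/s, matching the MiB/s column above.
```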
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:07.294 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.294 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3153661 00:24:07.294 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:07.294 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:07.294 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3153661' 00:24:07.294 killing process with pid 3153661 00:24:07.294 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3153661 00:24:07.294 18:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3153661 00:24:07.294 18:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:07.294 18:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:07.294 18:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:07.294 18:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:07.294 18:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:07.294 18:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:07.294 18:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:07.294 18:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:07.294 18:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:07.294 18:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.294 18:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.294 18:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:09.830 00:24:09.830 real 0m36.408s 00:24:09.830 user 1m36.645s 00:24:09.830 sys 0m8.860s 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:09.830 ************************************ 00:24:09.830 END TEST nvmf_host_multipath_status 00:24:09.830 ************************************ 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # 
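The nvmftestfini/killprocess trace above condenses to a short teardown sequence. A minimal sketch of the same flow, assuming the workspace path shown in the log and a $nvmfpid variable holding the target pid; the wait only succeeds when nvmf_tgt was started by the same shell:

    # Sketch of the teardown traced above (paths and pid variable are placeholders).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem first
    sync
    modprobe -v -r nvme-tcp        # prints the rmmod lines seen above (nvme_tcp, nvme_fabrics, nvme_keyring)
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the SPDK_NVMF-tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # remove the target network namespace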
set +x 00:24:09.830 ************************************ 00:24:09.830 START TEST nvmf_discovery_remove_ifc 00:24:09.830 ************************************ 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:09.830 * Looking for test storage... 00:24:09.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:09.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.830 --rc genhtml_branch_coverage=1 00:24:09.830 --rc genhtml_function_coverage=1 00:24:09.830 --rc genhtml_legend=1 00:24:09.830 --rc geninfo_all_blocks=1 00:24:09.830 --rc geninfo_unexecuted_blocks=1 00:24:09.830 00:24:09.830 ' 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:09.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.830 --rc genhtml_branch_coverage=1 00:24:09.830 --rc genhtml_function_coverage=1 00:24:09.830 --rc genhtml_legend=1 00:24:09.830 --rc geninfo_all_blocks=1 00:24:09.830 --rc geninfo_unexecuted_blocks=1 00:24:09.830 00:24:09.830 ' 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:09.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.830 --rc genhtml_branch_coverage=1 00:24:09.830 --rc genhtml_function_coverage=1 00:24:09.830 --rc genhtml_legend=1 00:24:09.830 --rc geninfo_all_blocks=1 00:24:09.830 --rc geninfo_unexecuted_blocks=1 00:24:09.830 00:24:09.830 ' 00:24:09.830 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:09.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.831 --rc genhtml_branch_coverage=1 00:24:09.831 --rc genhtml_function_coverage=1 00:24:09.831 --rc genhtml_legend=1 00:24:09.831 --rc geninfo_all_blocks=1 00:24:09.831 --rc geninfo_unexecuted_blocks=1 00:24:09.831 00:24:09.831 ' 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.831 
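The lt/cmp_versions trace above is the script's per-component version check (here deciding whether lcov 1.15 is older than 2). A minimal sketch of the same comparison, assuming dot/dash-separated numeric components as in the trace; version_lt is a hypothetical name for illustration:

    # Succeeds when $1 < $2, comparing numeric components split on '.' and '-'.
    version_lt() {
        local -a ver1 ver2
        IFS='.-' read -ra ver1 <<< "$1"
        IFS='.-' read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                                    # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"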
18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:09.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:09.831 18:00:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:15.113 18:01:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:15.113 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:15.113 18:01:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:15.113 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:15.113 Found net devices under 0000:31:00.0: cvl_0_0 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.113 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:15.114 Found net devices under 0000:31:00.1: cvl_0_1 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:15.114 
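The nvmf_tcp_init trace above builds the test topology out of the two e810 ports: cvl_0_0 is moved into a fresh network namespace as the target side, while cvl_0_1 stays in the root namespace as the initiator. A condensed sketch of those steps, with interface names, addresses, and the iptables rule taken directly from the trace:

    # Target NIC lives in its own namespace; initiator NIC stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic, tagged so teardown can strip exactly these rules:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'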
18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:15.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:15.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:24:15.114 00:24:15.114 --- 10.0.0.2 ping statistics --- 00:24:15.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.114 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:15.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:15.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:24:15.114 00:24:15.114 --- 10.0.0.1 ping statistics --- 00:24:15.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.114 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3164488 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3164488 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3164488 ']' 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
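waitforlisten polls the target's RPC socket rather than sleeping a fixed time (max_retries=100 in the trace). A minimal sketch of that pattern under the same paths; rpc_get_methods is used here only as a cheap liveness probe, not as the script's exact mechanism:

    # Poll until nvmf_tgt answers on its UNIX-domain RPC socket.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for (( i = 0; i < 100; i++ )); do
        if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            echo "target is up"
            break
        fi
        sleep 0.5
    done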
00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.114 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:15.114 [2024-12-06 18:01:02.757316] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:24:15.114 [2024-12-06 18:01:02.757353] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.114 [2024-12-06 18:01:02.833095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.114 [2024-12-06 18:01:02.870506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.114 [2024-12-06 18:01:02.870541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.114 [2024-12-06 18:01:02.870549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.114 [2024-12-06 18:01:02.870558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.114 [2024-12-06 18:01:02.870564] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.114 [2024-12-06 18:01:02.871217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.374 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.374 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:15.374 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:15.374 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:15.374 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.374 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.374 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:15.374 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.374 18:01:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.374 [2024-12-06 18:01:02.994160] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.374 [2024-12-06 18:01:03.002421] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:15.374 null0 00:24:15.374 [2024-12-06 18:01:03.034391] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.374 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.374 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3164661 00:24:15.374 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@60 -- # waitforlisten 3164661 /tmp/host.sock 00:24:15.374 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3164661 ']' 00:24:15.374 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:15.374 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:15.374 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:15.374 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:15.374 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:15.374 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.374 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:15.374 [2024-12-06 18:01:03.096351] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:24:15.374 [2024-12-06 18:01:03.096415] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3164661 ] 00:24:15.374 [2024-12-06 18:01:03.168222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.633 [2024-12-06 18:01:03.206861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.633 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.633 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:15.633 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:15.633 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:15.633 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.633 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.633 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.634 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:15.634 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.634 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.634 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.634 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 
--reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:15.634 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.634 18:01:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:16.571 [2024-12-06 18:01:04.310605] bdev_nvme.c:7510:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:16.571 [2024-12-06 18:01:04.310622] bdev_nvme.c:7596:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:16.571 [2024-12-06 18:01:04.310632] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:16.830 [2024-12-06 18:01:04.398883] bdev_nvme.c:7439:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:16.830 [2024-12-06 18:01:04.500847] bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:16.830 [2024-12-06 18:01:04.501688] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x20d0050:1 started. 00:24:16.830 [2024-12-06 18:01:04.502841] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:16.830 [2024-12-06 18:01:04.502875] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:16.830 [2024-12-06 18:01:04.502892] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:16.830 [2024-12-06 18:01:04.502903] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:16.830 [2024-12-06 18:01:04.502918] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:16.830 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.830 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:16.830 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:16.830 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:16.830 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:16.830 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.830 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:16.830 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:16.830 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:16.830 [2024-12-06 18:01:04.509929] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x20d0050 was disconnected and freed. delete nvme_qpair. 
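The discovery sequence above is driven entirely over /tmp/host.sock: start discovery with aggressive reconnect timeouts, then poll bdev_get_bdevs until the attached namespace appears as nvme0n1. A condensed sketch of those two steps, with the flags and the jq/sort/xargs pipeline copied from the trace:

    # Attach via the discovery service, then wait for the nvme0n1 bdev.
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock"
    $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    while [[ "$($RPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != "nvme0n1" ]]; do
        sleep 1    # same 1s poll as wait_for_bdev in the trace
    done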
00:24:16.830 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.830 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:16.830 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:16.830 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:16.830 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:16.830 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:17.089 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:17.089 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.089 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:17.089 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:17.089 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:17.089 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:17.089 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.089 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:17.089 18:01:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:18.042 18:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:18.042 18:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.042 18:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:18.042 18:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:18.042 18:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.042 18:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:18.042 18:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:18.042 18:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.042 18:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:18.042 18:01:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:18.978 18:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:18.978 18:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.978 18:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:18.978 18:01:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.978 18:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:18.978 18:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:18.978 18:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:18.978 18:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.978 18:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:18.978 18:01:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:20.355 18:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:20.355 18:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:20.355 18:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.355 18:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:20.355 18:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:20.355 18:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.355 18:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.355 18:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.355 18:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:20.355 18:01:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:21.290 18:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:21.290 18:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.290 18:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.290 18:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.291 18:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:21.291 18:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:21.291 18:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:21.291 18:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.291 18:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:21.291 18:01:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:22.229 18:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:22.229 18:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.229 18:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.229 18:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:22.229 18:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:22.229 18:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:22.229 18:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:22.229 18:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.229 18:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:22.229 18:01:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:22.229 [2024-12-06 18:01:09.943975] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:22.229 [2024-12-06 18:01:09.944009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.229 [2024-12-06 18:01:09.944018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.229 [2024-12-06 18:01:09.944025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.229 [2024-12-06 18:01:09.944030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.229 [2024-12-06 18:01:09.944036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.229 [2024-12-06 18:01:09.944041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.229 [2024-12-06 18:01:09.944047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.229 [2024-12-06 18:01:09.944052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.229 [2024-12-06 18:01:09.944058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:22.229 [2024-12-06 18:01:09.944063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.229 [2024-12-06 18:01:09.944072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac9e0 is same with the state(6) to be set 00:24:22.229 [2024-12-06 18:01:09.953997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ac9e0 (9): Bad file descriptor 00:24:22.229 [2024-12-06 18:01:09.964030] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:22.229 [2024-12-06 18:01:09.964039] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:22.229 [2024-12-06 18:01:09.964044] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:22.229 [2024-12-06 18:01:09.964048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:22.229 [2024-12-06 18:01:09.964065] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:23.168 18:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:23.168 18:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:23.168 18:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:23.168 18:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:23.168 18:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:23.168 18:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.168 18:01:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.168 [2024-12-06 18:01:10.991161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:23.168 [2024-12-06 18:01:10.991219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ac9e0 with addr=10.0.0.2, port=4420 00:24:23.168 [2024-12-06 18:01:10.991237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ac9e0 is same with the state(6) to be set 00:24:23.168 [2024-12-06 18:01:10.991272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ac9e0 (9): Bad file descriptor 00:24:23.168 [2024-12-06 18:01:10.991821] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:24:23.168 [2024-12-06 18:01:10.991863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:23.168 [2024-12-06 18:01:10.991878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:23.168 [2024-12-06 18:01:10.991894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:23.168 [2024-12-06 18:01:10.991907] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:23.168 [2024-12-06 18:01:10.991916] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:23.168 [2024-12-06 18:01:10.991925] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:23.168 [2024-12-06 18:01:10.991939] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
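errno 110 above is ETIMEDOUT: once cvl_0_0 is downed, the host's reads and reconnect attempts time out, and with --reconnect-delay-sec 1 against --ctrlr-loss-timeout-sec 2 the host gets only a couple of retries before declaring controller loss and deleting the bdev, which is the empty bdev list wait_for_bdev '' is polling for. A hedged sketch of how that window can be observed from the host RPC socket (output formatting varies by SPDK version):

    # While cvl_0_0 is down, watch host-side state during the retry window.
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock"
    $RPC bdev_nvme_get_controllers           # controller still listed while reconnects are pending
    $RPC bdev_get_bdevs | jq -r '.[].name'   # prints nvme0n1 until ctrlr-loss-timeout expires, then nothing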
00:24:23.168 [2024-12-06 18:01:10.991947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:23.430 18:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.430 18:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:23.430 18:01:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:24.370 [2024-12-06 18:01:11.994333] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:24.370 [2024-12-06 18:01:11.994359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:24.370 [2024-12-06 18:01:11.994371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:24.370 [2024-12-06 18:01:11.994376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:24.370 [2024-12-06 18:01:11.994383] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:24.370 [2024-12-06 18:01:11.994388] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:24.370 [2024-12-06 18:01:11.994392] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:24.370 [2024-12-06 18:01:11.994396] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:24.370 [2024-12-06 18:01:11.994416] bdev_nvme.c:7261:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:24.370 [2024-12-06 18:01:11.994442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.370 [2024-12-06 18:01:11.994450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.370 [2024-12-06 18:01:11.994460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.370 [2024-12-06 18:01:11.994465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.370 [2024-12-06 18:01:11.994471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.370 [2024-12-06 18:01:11.994476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.370 [2024-12-06 18:01:11.994481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.370 [2024-12-06 18:01:11.994487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.370 [2024-12-06 18:01:11.994493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.370 [2024-12-06 18:01:11.994498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.370 [2024-12-06 18:01:11.994504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:24:24.370 [2024-12-06 18:01:11.995231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209bd20 (9): Bad file descriptor 00:24:24.370 [2024-12-06 18:01:11.996242] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:24.370 [2024-12-06 18:01:11.996251] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:24.370 18:01:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:25.383 18:01:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:25.383 18:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:25.383 18:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:25.383 18:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.383 18:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:25.383 18:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:25.383 18:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.383 18:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.383 18:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:25.383 18:01:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:26.388 [2024-12-06 18:01:14.057078] bdev_nvme.c:7510:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:26.388 [2024-12-06 18:01:14.057097] bdev_nvme.c:7596:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:26.388 [2024-12-06 18:01:14.057109] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:26.388 [2024-12-06 18:01:14.186474] bdev_nvme.c:7439:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:26.388 18:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:26.388 18:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.388 18:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:26.388 18:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:26.388 18:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:26.388 18:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.388 18:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.388 18:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.648 18:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:26.648 18:01:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:26.648 [2024-12-06 18:01:14.367481] bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:26.648 [2024-12-06 18:01:14.368198] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x2085fc0:1 started. 
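The one-second cadence in these records (18:01:12, 18:01:13, 18:01:14, ...) is a polling loop: wait_for_bdev re-reads the bdev list until the expected name shows up. A minimal sketch of what host/discovery_remove_ifc.sh@33-34 implies, reusing the get_bdev_list helper sketched earlier (the in-tree helper may additionally enforce a timeout):

    wait_for_bdev() {
        local expected=$1
        # Poll once per second until the sorted bdev list matches the expectation,
        # e.g. until discovery re-attaches nvme1n1 after the interface comes back up.
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }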
00:24:26.648 [2024-12-06 18:01:14.369109] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:26.648 [2024-12-06 18:01:14.369136] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:26.648 [2024-12-06 18:01:14.369150] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:26.648 [2024-12-06 18:01:14.369162] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:26.649 [2024-12-06 18:01:14.369168] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:26.649 [2024-12-06 18:01:14.374318] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x2085fc0 was disconnected and freed. delete nvme_qpair. 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3164661 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3164661 ']' 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3164661 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3164661 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3164661' 00:24:27.586 killing process with pid 3164661 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3164661 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3164661 00:24:27.586 18:01:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:27.586 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:27.586 rmmod nvme_tcp 00:24:27.846 rmmod nvme_fabrics 00:24:27.846 rmmod nvme_keyring 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3164488 ']' 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3164488 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3164488 ']' 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3164488 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3164488 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3164488' 00:24:27.846 killing process with pid 3164488 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3164488 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3164488 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.846 18:01:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:30.383 00:24:30.383 real 0m20.517s 00:24:30.383 user 0m25.526s 00:24:30.383 sys 0m5.316s 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.383 ************************************ 00:24:30.383 END TEST nvmf_discovery_remove_ifc 00:24:30.383 ************************************ 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.383 ************************************ 00:24:30.383 START TEST nvmf_identify_kernel_target 00:24:30.383 ************************************ 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:30.383 * Looking for test storage... 
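The teardown that closed nvmf_discovery_remove_ifc above follows a fixed recipe. Condensed from the nvmftestfini records, with $tgt_pid standing in for the PIDs the log shows (3164661, 3164488), and the namespace removal assumed from the _remove_spdk_ns name since its output is redirected away in the log:

    sync
    modprobe -v -r nvme-tcp                     # rmmod output above: nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill -0 "$tgt_pid" && kill "$tgt_pid"       # killprocess: stop the target app...
    wait "$tgt_pid"                             # ...and reap it
    # iptr: strip only the rules the test added (tagged with an SPDK_NVMF comment)
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk             # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1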
00:24:30.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:30.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.383 --rc genhtml_branch_coverage=1 00:24:30.383 --rc genhtml_function_coverage=1 00:24:30.383 --rc genhtml_legend=1 00:24:30.383 --rc geninfo_all_blocks=1 00:24:30.383 --rc geninfo_unexecuted_blocks=1 00:24:30.383 00:24:30.383 ' 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:30.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.383 --rc genhtml_branch_coverage=1 00:24:30.383 --rc genhtml_function_coverage=1 00:24:30.383 --rc genhtml_legend=1 00:24:30.383 --rc geninfo_all_blocks=1 00:24:30.383 --rc geninfo_unexecuted_blocks=1 00:24:30.383 00:24:30.383 ' 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:30.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.383 --rc genhtml_branch_coverage=1 00:24:30.383 --rc genhtml_function_coverage=1 00:24:30.383 --rc genhtml_legend=1 00:24:30.383 --rc geninfo_all_blocks=1 00:24:30.383 --rc geninfo_unexecuted_blocks=1 00:24:30.383 00:24:30.383 ' 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:30.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.383 --rc genhtml_branch_coverage=1 00:24:30.383 --rc genhtml_function_coverage=1 00:24:30.383 --rc genhtml_legend=1 00:24:30.383 --rc geninfo_all_blocks=1 00:24:30.383 --rc geninfo_unexecuted_blocks=1 00:24:30.383 00:24:30.383 ' 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.383 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:24:30.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:30.384 18:01:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.663 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:35.663 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:35.663 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:35.663 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:35.663 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:35.664 18:01:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:35.664 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:35.664 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:35.664 Found net devices under 0000:31:00.0: cvl_0_0 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:35.664 Found net devices under 0000:31:00.1: cvl_0_1 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:35.664 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:35.665 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.665 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:35.665 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:35.665 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:35.665 18:01:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:35.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:35.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:24:35.665 00:24:35.665 --- 10.0.0.2 ping statistics --- 00:24:35.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.665 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:35.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:35.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:24:35.665 00:24:35.665 --- 10.0.0.1 ping statistics --- 00:24:35.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.665 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.665 18:01:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:35.665 18:01:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:38.203 Waiting for block devices as requested 00:24:38.203 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:38.203 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:38.203 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:38.203 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:38.203 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:38.203 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:38.203 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:38.203 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:38.203 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:24:38.463 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:38.463 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:38.463 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:38.722 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:38.722 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:38.722 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:38.722 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:38.982 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:39.243 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:39.243 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:39.243 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:39.243 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:39.243 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:39.243 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
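The nvmf_tcp_init records further up pin down the test topology: the two cvl_0_* ports of one physical E810 NIC are looped back to each other, with the target-side port hidden in its own network namespace so both ends can run on one machine. A condensed sketch of those steps exactly as logged (interface names and addresses taken directly from the records):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP (port 4420) in, tagged so teardown can strip exactly this rule:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # reachability check, both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

For this kernel-target test the roles then flip: target_ip resolves to 10.0.0.1 in the default namespace, where configure_kernel_target builds the nvmet configfs tree shown in the records that follow.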
00:24:39.243 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:39.244 No valid GPT data, bailing 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:39.244 18:01:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:24:39.244 00:24:39.244 Discovery Log Number of Records 2, Generation counter 2 00:24:39.244 =====Discovery Log Entry 0====== 00:24:39.244 trtype: tcp 00:24:39.244 adrfam: ipv4 00:24:39.244 subtype: current discovery subsystem 00:24:39.244 treq: not specified, sq flow control disable supported 00:24:39.244 portid: 1 00:24:39.244 trsvcid: 4420 00:24:39.244 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:39.244 traddr: 10.0.0.1 00:24:39.244 eflags: none 00:24:39.244 sectype: none 00:24:39.244 =====Discovery Log Entry 1====== 00:24:39.244 trtype: tcp 00:24:39.244 adrfam: ipv4 00:24:39.244 subtype: nvme subsystem 00:24:39.244 treq: not specified, sq flow control disable 
supported 00:24:39.244 portid: 1 00:24:39.244 trsvcid: 4420 00:24:39.244 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:39.244 traddr: 10.0.0.1 00:24:39.244 eflags: none 00:24:39.244 sectype: none 00:24:39.244 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:39.244 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:39.504 ===================================================== 00:24:39.504 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:39.504 ===================================================== 00:24:39.504 Controller Capabilities/Features 00:24:39.504 ================================ 00:24:39.504 Vendor ID: 0000 00:24:39.504 Subsystem Vendor ID: 0000 00:24:39.504 Serial Number: c7051fead77bf652b752 00:24:39.504 Model Number: Linux 00:24:39.504 Firmware Version: 6.8.9-20 00:24:39.504 Recommended Arb Burst: 0 00:24:39.504 IEEE OUI Identifier: 00 00 00 00:24:39.504 Multi-path I/O 00:24:39.504 May have multiple subsystem ports: No 00:24:39.504 May have multiple controllers: No 00:24:39.504 Associated with SR-IOV VF: No 00:24:39.504 Max Data Transfer Size: Unlimited 00:24:39.504 Max Number of Namespaces: 0 00:24:39.504 Max Number of I/O Queues: 1024 00:24:39.504 NVMe Specification Version (VS): 1.3 00:24:39.504 NVMe Specification Version (Identify): 1.3 00:24:39.504 Maximum Queue Entries: 1024 00:24:39.504 Contiguous Queues Required: No 00:24:39.504 Arbitration Mechanisms Supported 00:24:39.504 Weighted Round Robin: Not Supported 00:24:39.504 Vendor Specific: Not Supported 00:24:39.504 Reset Timeout: 7500 ms 00:24:39.504 Doorbell Stride: 4 bytes 00:24:39.504 NVM Subsystem Reset: Not Supported 00:24:39.504 Command Sets Supported 00:24:39.504 NVM Command Set: Supported 00:24:39.504 Boot Partition: Not Supported 00:24:39.504 Memory Page Size Minimum: 4096 bytes 00:24:39.504 Memory Page Size Maximum: 4096 bytes 00:24:39.504 Persistent Memory Region: Not Supported 00:24:39.504 Optional Asynchronous Events Supported 00:24:39.504 Namespace Attribute Notices: Not Supported 00:24:39.504 Firmware Activation Notices: Not Supported 00:24:39.504 ANA Change Notices: Not Supported 00:24:39.504 PLE Aggregate Log Change Notices: Not Supported 00:24:39.504 LBA Status Info Alert Notices: Not Supported 00:24:39.504 EGE Aggregate Log Change Notices: Not Supported 00:24:39.504 Normal NVM Subsystem Shutdown event: Not Supported 00:24:39.504 Zone Descriptor Change Notices: Not Supported 00:24:39.504 Discovery Log Change Notices: Supported 00:24:39.504 Controller Attributes 00:24:39.504 128-bit Host Identifier: Not Supported 00:24:39.504 Non-Operational Permissive Mode: Not Supported 00:24:39.504 NVM Sets: Not Supported 00:24:39.504 Read Recovery Levels: Not Supported 00:24:39.504 Endurance Groups: Not Supported 00:24:39.504 Predictable Latency Mode: Not Supported 00:24:39.504 Traffic Based Keep ALive: Not Supported 00:24:39.504 Namespace Granularity: Not Supported 00:24:39.504 SQ Associations: Not Supported 00:24:39.504 UUID List: Not Supported 00:24:39.504 Multi-Domain Subsystem: Not Supported 00:24:39.504 Fixed Capacity Management: Not Supported 00:24:39.504 Variable Capacity Management: Not Supported 00:24:39.504 Delete Endurance Group: Not Supported 00:24:39.504 Delete NVM Set: Not Supported 00:24:39.504 Extended LBA Formats Supported: Not Supported 00:24:39.504 Flexible Data Placement 
Supported: Not Supported 00:24:39.504 00:24:39.504 Controller Memory Buffer Support 00:24:39.504 ================================ 00:24:39.504 Supported: No 00:24:39.504 00:24:39.504 Persistent Memory Region Support 00:24:39.504 ================================ 00:24:39.504 Supported: No 00:24:39.504 00:24:39.504 Admin Command Set Attributes 00:24:39.504 ============================ 00:24:39.504 Security Send/Receive: Not Supported 00:24:39.504 Format NVM: Not Supported 00:24:39.504 Firmware Activate/Download: Not Supported 00:24:39.504 Namespace Management: Not Supported 00:24:39.504 Device Self-Test: Not Supported 00:24:39.504 Directives: Not Supported 00:24:39.504 NVMe-MI: Not Supported 00:24:39.504 Virtualization Management: Not Supported 00:24:39.504 Doorbell Buffer Config: Not Supported 00:24:39.504 Get LBA Status Capability: Not Supported 00:24:39.504 Command & Feature Lockdown Capability: Not Supported 00:24:39.504 Abort Command Limit: 1 00:24:39.504 Async Event Request Limit: 1 00:24:39.504 Number of Firmware Slots: N/A 00:24:39.504 Firmware Slot 1 Read-Only: N/A 00:24:39.504 Firmware Activation Without Reset: N/A 00:24:39.504 Multiple Update Detection Support: N/A 00:24:39.504 Firmware Update Granularity: No Information Provided 00:24:39.504 Per-Namespace SMART Log: No 00:24:39.504 Asymmetric Namespace Access Log Page: Not Supported 00:24:39.504 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:39.504 Command Effects Log Page: Not Supported 00:24:39.504 Get Log Page Extended Data: Supported 00:24:39.504 Telemetry Log Pages: Not Supported 00:24:39.504 Persistent Event Log Pages: Not Supported 00:24:39.504 Supported Log Pages Log Page: May Support 00:24:39.504 Commands Supported & Effects Log Page: Not Supported 00:24:39.504 Feature Identifiers & Effects Log Page:May Support 00:24:39.504 NVMe-MI Commands & Effects Log Page: May Support 00:24:39.504 Data Area 4 for Telemetry Log: Not Supported 00:24:39.504 Error Log Page Entries Supported: 1 00:24:39.504 Keep Alive: Not Supported 00:24:39.504 00:24:39.504 NVM Command Set Attributes 00:24:39.504 ========================== 00:24:39.504 Submission Queue Entry Size 00:24:39.504 Max: 1 00:24:39.504 Min: 1 00:24:39.504 Completion Queue Entry Size 00:24:39.504 Max: 1 00:24:39.504 Min: 1 00:24:39.504 Number of Namespaces: 0 00:24:39.504 Compare Command: Not Supported 00:24:39.504 Write Uncorrectable Command: Not Supported 00:24:39.504 Dataset Management Command: Not Supported 00:24:39.504 Write Zeroes Command: Not Supported 00:24:39.504 Set Features Save Field: Not Supported 00:24:39.504 Reservations: Not Supported 00:24:39.504 Timestamp: Not Supported 00:24:39.504 Copy: Not Supported 00:24:39.505 Volatile Write Cache: Not Present 00:24:39.505 Atomic Write Unit (Normal): 1 00:24:39.505 Atomic Write Unit (PFail): 1 00:24:39.505 Atomic Compare & Write Unit: 1 00:24:39.505 Fused Compare & Write: Not Supported 00:24:39.505 Scatter-Gather List 00:24:39.505 SGL Command Set: Supported 00:24:39.505 SGL Keyed: Not Supported 00:24:39.505 SGL Bit Bucket Descriptor: Not Supported 00:24:39.505 SGL Metadata Pointer: Not Supported 00:24:39.505 Oversized SGL: Not Supported 00:24:39.505 SGL Metadata Address: Not Supported 00:24:39.505 SGL Offset: Supported 00:24:39.505 Transport SGL Data Block: Not Supported 00:24:39.505 Replay Protected Memory Block: Not Supported 00:24:39.505 00:24:39.505 Firmware Slot Information 00:24:39.505 ========================= 00:24:39.505 Active slot: 0 00:24:39.505 00:24:39.505 00:24:39.505 Error Log 00:24:39.505 
========= 00:24:39.505 00:24:39.505 Active Namespaces 00:24:39.505 ================= 00:24:39.505 Discovery Log Page 00:24:39.505 ================== 00:24:39.505 Generation Counter: 2 00:24:39.505 Number of Records: 2 00:24:39.505 Record Format: 0 00:24:39.505 00:24:39.505 Discovery Log Entry 0 00:24:39.505 ---------------------- 00:24:39.505 Transport Type: 3 (TCP) 00:24:39.505 Address Family: 1 (IPv4) 00:24:39.505 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:39.505 Entry Flags: 00:24:39.505 Duplicate Returned Information: 0 00:24:39.505 Explicit Persistent Connection Support for Discovery: 0 00:24:39.505 Transport Requirements: 00:24:39.505 Secure Channel: Not Specified 00:24:39.505 Port ID: 1 (0x0001) 00:24:39.505 Controller ID: 65535 (0xffff) 00:24:39.505 Admin Max SQ Size: 32 00:24:39.505 Transport Service Identifier: 4420 00:24:39.505 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:39.505 Transport Address: 10.0.0.1 00:24:39.505 Discovery Log Entry 1 00:24:39.505 ---------------------- 00:24:39.505 Transport Type: 3 (TCP) 00:24:39.505 Address Family: 1 (IPv4) 00:24:39.505 Subsystem Type: 2 (NVM Subsystem) 00:24:39.505 Entry Flags: 00:24:39.505 Duplicate Returned Information: 0 00:24:39.505 Explicit Persistent Connection Support for Discovery: 0 00:24:39.505 Transport Requirements: 00:24:39.505 Secure Channel: Not Specified 00:24:39.505 Port ID: 1 (0x0001) 00:24:39.505 Controller ID: 65535 (0xffff) 00:24:39.505 Admin Max SQ Size: 32 00:24:39.505 Transport Service Identifier: 4420 00:24:39.505 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:39.505 Transport Address: 10.0.0.1 00:24:39.505 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:39.505 get_feature(0x01) failed 00:24:39.505 get_feature(0x02) failed 00:24:39.505 get_feature(0x04) failed 00:24:39.505 ===================================================== 00:24:39.505 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:39.505 ===================================================== 00:24:39.505 Controller Capabilities/Features 00:24:39.505 ================================ 00:24:39.505 Vendor ID: 0000 00:24:39.505 Subsystem Vendor ID: 0000 00:24:39.505 Serial Number: c0d0dacc3b6d5294c89c 00:24:39.505 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:39.505 Firmware Version: 6.8.9-20 00:24:39.505 Recommended Arb Burst: 6 00:24:39.505 IEEE OUI Identifier: 00 00 00 00:24:39.505 Multi-path I/O 00:24:39.505 May have multiple subsystem ports: Yes 00:24:39.505 May have multiple controllers: Yes 00:24:39.505 Associated with SR-IOV VF: No 00:24:39.505 Max Data Transfer Size: Unlimited 00:24:39.505 Max Number of Namespaces: 1024 00:24:39.505 Max Number of I/O Queues: 128 00:24:39.505 NVMe Specification Version (VS): 1.3 00:24:39.505 NVMe Specification Version (Identify): 1.3 00:24:39.505 Maximum Queue Entries: 1024 00:24:39.505 Contiguous Queues Required: No 00:24:39.505 Arbitration Mechanisms Supported 00:24:39.505 Weighted Round Robin: Not Supported 00:24:39.505 Vendor Specific: Not Supported 00:24:39.505 Reset Timeout: 7500 ms 00:24:39.505 Doorbell Stride: 4 bytes 00:24:39.505 NVM Subsystem Reset: Not Supported 00:24:39.505 Command Sets Supported 00:24:39.505 NVM Command Set: Supported 00:24:39.505 Boot Partition: Not Supported 00:24:39.505 
Memory Page Size Minimum: 4096 bytes 00:24:39.505 Memory Page Size Maximum: 4096 bytes 00:24:39.505 Persistent Memory Region: Not Supported 00:24:39.505 Optional Asynchronous Events Supported 00:24:39.505 Namespace Attribute Notices: Supported 00:24:39.505 Firmware Activation Notices: Not Supported 00:24:39.505 ANA Change Notices: Supported 00:24:39.505 PLE Aggregate Log Change Notices: Not Supported 00:24:39.505 LBA Status Info Alert Notices: Not Supported 00:24:39.505 EGE Aggregate Log Change Notices: Not Supported 00:24:39.505 Normal NVM Subsystem Shutdown event: Not Supported 00:24:39.505 Zone Descriptor Change Notices: Not Supported 00:24:39.505 Discovery Log Change Notices: Not Supported 00:24:39.505 Controller Attributes 00:24:39.505 128-bit Host Identifier: Supported 00:24:39.505 Non-Operational Permissive Mode: Not Supported 00:24:39.505 NVM Sets: Not Supported 00:24:39.505 Read Recovery Levels: Not Supported 00:24:39.505 Endurance Groups: Not Supported 00:24:39.505 Predictable Latency Mode: Not Supported 00:24:39.505 Traffic Based Keep ALive: Supported 00:24:39.505 Namespace Granularity: Not Supported 00:24:39.505 SQ Associations: Not Supported 00:24:39.505 UUID List: Not Supported 00:24:39.505 Multi-Domain Subsystem: Not Supported 00:24:39.505 Fixed Capacity Management: Not Supported 00:24:39.505 Variable Capacity Management: Not Supported 00:24:39.505 Delete Endurance Group: Not Supported 00:24:39.505 Delete NVM Set: Not Supported 00:24:39.505 Extended LBA Formats Supported: Not Supported 00:24:39.505 Flexible Data Placement Supported: Not Supported 00:24:39.505 00:24:39.505 Controller Memory Buffer Support 00:24:39.505 ================================ 00:24:39.505 Supported: No 00:24:39.505 00:24:39.505 Persistent Memory Region Support 00:24:39.505 ================================ 00:24:39.505 Supported: No 00:24:39.505 00:24:39.505 Admin Command Set Attributes 00:24:39.505 ============================ 00:24:39.505 Security Send/Receive: Not Supported 00:24:39.505 Format NVM: Not Supported 00:24:39.505 Firmware Activate/Download: Not Supported 00:24:39.506 Namespace Management: Not Supported 00:24:39.506 Device Self-Test: Not Supported 00:24:39.506 Directives: Not Supported 00:24:39.506 NVMe-MI: Not Supported 00:24:39.506 Virtualization Management: Not Supported 00:24:39.506 Doorbell Buffer Config: Not Supported 00:24:39.506 Get LBA Status Capability: Not Supported 00:24:39.506 Command & Feature Lockdown Capability: Not Supported 00:24:39.506 Abort Command Limit: 4 00:24:39.506 Async Event Request Limit: 4 00:24:39.506 Number of Firmware Slots: N/A 00:24:39.506 Firmware Slot 1 Read-Only: N/A 00:24:39.506 Firmware Activation Without Reset: N/A 00:24:39.506 Multiple Update Detection Support: N/A 00:24:39.506 Firmware Update Granularity: No Information Provided 00:24:39.506 Per-Namespace SMART Log: Yes 00:24:39.506 Asymmetric Namespace Access Log Page: Supported 00:24:39.506 ANA Transition Time : 10 sec 00:24:39.506 00:24:39.506 Asymmetric Namespace Access Capabilities 00:24:39.506 ANA Optimized State : Supported 00:24:39.506 ANA Non-Optimized State : Supported 00:24:39.506 ANA Inaccessible State : Supported 00:24:39.506 ANA Persistent Loss State : Supported 00:24:39.506 ANA Change State : Supported 00:24:39.506 ANAGRPID is not changed : No 00:24:39.506 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:39.506 00:24:39.506 ANA Group Identifier Maximum : 128 00:24:39.506 Number of ANA Group Identifiers : 128 00:24:39.506 Max Number of Allowed Namespaces : 1024 00:24:39.506 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:39.506 Command Effects Log Page: Supported 00:24:39.506 Get Log Page Extended Data: Supported 00:24:39.506 Telemetry Log Pages: Not Supported 00:24:39.506 Persistent Event Log Pages: Not Supported 00:24:39.506 Supported Log Pages Log Page: May Support 00:24:39.506 Commands Supported & Effects Log Page: Not Supported 00:24:39.506 Feature Identifiers & Effects Log Page:May Support 00:24:39.506 NVMe-MI Commands & Effects Log Page: May Support 00:24:39.506 Data Area 4 for Telemetry Log: Not Supported 00:24:39.506 Error Log Page Entries Supported: 128 00:24:39.506 Keep Alive: Supported 00:24:39.506 Keep Alive Granularity: 1000 ms 00:24:39.506 00:24:39.506 NVM Command Set Attributes 00:24:39.506 ========================== 00:24:39.506 Submission Queue Entry Size 00:24:39.506 Max: 64 00:24:39.506 Min: 64 00:24:39.506 Completion Queue Entry Size 00:24:39.506 Max: 16 00:24:39.506 Min: 16 00:24:39.506 Number of Namespaces: 1024 00:24:39.506 Compare Command: Not Supported 00:24:39.506 Write Uncorrectable Command: Not Supported 00:24:39.506 Dataset Management Command: Supported 00:24:39.506 Write Zeroes Command: Supported 00:24:39.506 Set Features Save Field: Not Supported 00:24:39.506 Reservations: Not Supported 00:24:39.506 Timestamp: Not Supported 00:24:39.506 Copy: Not Supported 00:24:39.506 Volatile Write Cache: Present 00:24:39.506 Atomic Write Unit (Normal): 1 00:24:39.506 Atomic Write Unit (PFail): 1 00:24:39.506 Atomic Compare & Write Unit: 1 00:24:39.506 Fused Compare & Write: Not Supported 00:24:39.506 Scatter-Gather List 00:24:39.506 SGL Command Set: Supported 00:24:39.506 SGL Keyed: Not Supported 00:24:39.506 SGL Bit Bucket Descriptor: Not Supported 00:24:39.506 SGL Metadata Pointer: Not Supported 00:24:39.506 Oversized SGL: Not Supported 00:24:39.506 SGL Metadata Address: Not Supported 00:24:39.506 SGL Offset: Supported 00:24:39.506 Transport SGL Data Block: Not Supported 00:24:39.506 Replay Protected Memory Block: Not Supported 00:24:39.506 00:24:39.506 Firmware Slot Information 00:24:39.506 ========================= 00:24:39.506 Active slot: 0 00:24:39.506 00:24:39.506 Asymmetric Namespace Access 00:24:39.506 =========================== 00:24:39.506 Change Count : 0 00:24:39.506 Number of ANA Group Descriptors : 1 00:24:39.506 ANA Group Descriptor : 0 00:24:39.506 ANA Group ID : 1 00:24:39.506 Number of NSID Values : 1 00:24:39.506 Change Count : 0 00:24:39.506 ANA State : 1 00:24:39.506 Namespace Identifier : 1 00:24:39.506 00:24:39.506 Commands Supported and Effects 00:24:39.506 ============================== 00:24:39.506 Admin Commands 00:24:39.506 -------------- 00:24:39.506 Get Log Page (02h): Supported 00:24:39.506 Identify (06h): Supported 00:24:39.506 Abort (08h): Supported 00:24:39.506 Set Features (09h): Supported 00:24:39.506 Get Features (0Ah): Supported 00:24:39.506 Asynchronous Event Request (0Ch): Supported 00:24:39.506 Keep Alive (18h): Supported 00:24:39.506 I/O Commands 00:24:39.506 ------------ 00:24:39.506 Flush (00h): Supported 00:24:39.506 Write (01h): Supported LBA-Change 00:24:39.506 Read (02h): Supported 00:24:39.506 Write Zeroes (08h): Supported LBA-Change 00:24:39.506 Dataset Management (09h): Supported 00:24:39.506 00:24:39.506 Error Log 00:24:39.506 ========= 00:24:39.506 Entry: 0 00:24:39.506 Error Count: 0x3 00:24:39.506 Submission Queue Id: 0x0 00:24:39.506 Command Id: 0x5 00:24:39.506 Phase Bit: 0 00:24:39.506 Status Code: 0x2 00:24:39.506 Status Code Type: 0x0 00:24:39.506 Do Not Retry: 1 00:24:39.506 
Error Location: 0x28 00:24:39.506 LBA: 0x0 00:24:39.506 Namespace: 0x0 00:24:39.506 Vendor Log Page: 0x0 00:24:39.506 ----------- 00:24:39.506 Entry: 1 00:24:39.506 Error Count: 0x2 00:24:39.506 Submission Queue Id: 0x0 00:24:39.506 Command Id: 0x5 00:24:39.506 Phase Bit: 0 00:24:39.506 Status Code: 0x2 00:24:39.506 Status Code Type: 0x0 00:24:39.506 Do Not Retry: 1 00:24:39.506 Error Location: 0x28 00:24:39.506 LBA: 0x0 00:24:39.506 Namespace: 0x0 00:24:39.506 Vendor Log Page: 0x0 00:24:39.506 ----------- 00:24:39.506 Entry: 2 00:24:39.506 Error Count: 0x1 00:24:39.506 Submission Queue Id: 0x0 00:24:39.506 Command Id: 0x4 00:24:39.506 Phase Bit: 0 00:24:39.506 Status Code: 0x2 00:24:39.506 Status Code Type: 0x0 00:24:39.506 Do Not Retry: 1 00:24:39.506 Error Location: 0x28 00:24:39.506 LBA: 0x0 00:24:39.506 Namespace: 0x0 00:24:39.506 Vendor Log Page: 0x0 00:24:39.506 00:24:39.506 Number of Queues 00:24:39.506 ================ 00:24:39.506 Number of I/O Submission Queues: 128 00:24:39.506 Number of I/O Completion Queues: 128 00:24:39.506 00:24:39.506 ZNS Specific Controller Data 00:24:39.506 ============================ 00:24:39.506 Zone Append Size Limit: 0 00:24:39.507 00:24:39.507 00:24:39.507 Active Namespaces 00:24:39.507 ================= 00:24:39.507 get_feature(0x05) failed 00:24:39.507 Namespace ID:1 00:24:39.507 Command Set Identifier: NVM (00h) 00:24:39.507 Deallocate: Supported 00:24:39.507 Deallocated/Unwritten Error: Not Supported 00:24:39.507 Deallocated Read Value: Unknown 00:24:39.507 Deallocate in Write Zeroes: Not Supported 00:24:39.507 Deallocated Guard Field: 0xFFFF 00:24:39.507 Flush: Supported 00:24:39.507 Reservation: Not Supported 00:24:39.507 Namespace Sharing Capabilities: Multiple Controllers 00:24:39.507 Size (in LBAs): 3750748848 (1788GiB) 00:24:39.507 Capacity (in LBAs): 3750748848 (1788GiB) 00:24:39.507 Utilization (in LBAs): 3750748848 (1788GiB) 00:24:39.507 UUID: 5869dba6-adb2-4b25-b635-96cd70c0167e 00:24:39.507 Thin Provisioning: Not Supported 00:24:39.507 Per-NS Atomic Units: Yes 00:24:39.507 Atomic Write Unit (Normal): 8 00:24:39.507 Atomic Write Unit (PFail): 8 00:24:39.507 Preferred Write Granularity: 8 00:24:39.507 Atomic Compare & Write Unit: 8 00:24:39.507 Atomic Boundary Size (Normal): 0 00:24:39.507 Atomic Boundary Size (PFail): 0 00:24:39.507 Atomic Boundary Offset: 0 00:24:39.507 NGUID/EUI64 Never Reused: No 00:24:39.507 ANA group ID: 1 00:24:39.507 Namespace Write Protected: No 00:24:39.507 Number of LBA Formats: 1 00:24:39.507 Current LBA Format: LBA Format #00 00:24:39.507 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:39.507 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:39.507 rmmod nvme_tcp 00:24:39.507 rmmod nvme_fabrics 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.507 18:01:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.039 18:01:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:42.039 18:01:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:42.039 18:01:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:42.039 18:01:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:42.039 18:01:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:42.039 18:01:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:42.039 18:01:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:42.039 18:01:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:42.039 18:01:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:42.039 18:01:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:42.039 18:01:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:43.950 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:24:43.950 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:24:43.950 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:24:43.950 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:24:43.950 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:24:43.950 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:24:43.950 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:24:43.950 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:24:43.950 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:24:43.950 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:24:43.950 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:24:44.210 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:24:44.210 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:24:44.210 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:24:44.210 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:24:44.210 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:24:46.126 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:24:46.126 00:24:46.126 real 0m16.182s 00:24:46.126 user 0m3.432s 00:24:46.126 sys 0m7.964s 00:24:46.126 18:01:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:46.126 18:01:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:46.126 ************************************ 00:24:46.126 END TEST nvmf_identify_kernel_target 00:24:46.126 ************************************ 00:24:46.126 18:01:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:46.126 18:01:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:46.126 18:01:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:46.126 18:01:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.126 ************************************ 00:24:46.126 START TEST nvmf_auth_host 00:24:46.126 ************************************ 00:24:46.126 18:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:46.386 * Looking for test storage... 
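For reference, the nvmf_identify_kernel_target run that just finished drives the Linux kernel nvmet target entirely through configfs: it creates a subsystem, exposes /dev/nvme0n1 as namespace 1, binds a TCP port on 10.0.0.1:4420, links the subsystem into the port, runs discovery/identify against it, and tears everything down in reverse order (clean_kernel_target above). A minimal standalone sketch of that lifecycle follows; the attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are assumptions based on the stock nvmet configfs layout, since the xtrace lines above elide the echo redirection targets:

#!/usr/bin/env bash
# Sketch of the kernel nvmet TCP target lifecycle seen in the trace above.
# Assumes nvmet/nvmet-tcp are loadable and /dev/nvme0n1 is not otherwise in use.
set -e
nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

mkdir "$sub"                                  # common.sh@686: subsystem
mkdir "$sub/namespaces/1"                     # common.sh@687: namespace 1
mkdir "$port"                                 # common.sh@688: port 1
echo "SPDK-$nqn" > "$sub/attr_model"          # @693 (assumed attribute file)
echo 1 > "$sub/attr_allow_any_host"           # @695 (assumed attribute file)
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"   # @696
echo 1 > "$sub/namespaces/1/enable"           # @697: namespace online
echo 10.0.0.1 > "$port/addr_traddr"           # @699
echo tcp      > "$port/addr_trtype"           # @700
echo 4420     > "$port/addr_trsvcid"          # @701
echo ipv4     > "$port/addr_adrfam"           # @702
ln -s "$sub" "$port/subsystems/"              # @705: port now exports the subsystem

# Discovery now returns two records (discovery subsystem + testnqn), as logged.
nvme discover -t tcp -a 10.0.0.1 -s 4420

# Teardown in reverse order (clean_kernel_target, common.sh@714-723).
echo 0 > "$sub/namespaces/1/enable"
rm -f "$port/subsystems/$nqn"
rmdir "$sub/namespaces/1" "$port" "$sub"
modprobe -r nvmet_tcp nvmet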
00:24:46.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:46.386 18:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:46.386 18:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:46.386 18:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:46.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.386 --rc genhtml_branch_coverage=1 00:24:46.386 --rc genhtml_function_coverage=1 00:24:46.386 --rc genhtml_legend=1 00:24:46.386 --rc geninfo_all_blocks=1 00:24:46.386 --rc geninfo_unexecuted_blocks=1 00:24:46.386 00:24:46.386 ' 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:46.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.386 --rc genhtml_branch_coverage=1 00:24:46.386 --rc genhtml_function_coverage=1 00:24:46.386 --rc genhtml_legend=1 00:24:46.386 --rc geninfo_all_blocks=1 00:24:46.386 --rc geninfo_unexecuted_blocks=1 00:24:46.386 00:24:46.386 ' 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:46.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.386 --rc genhtml_branch_coverage=1 00:24:46.386 --rc genhtml_function_coverage=1 00:24:46.386 --rc genhtml_legend=1 00:24:46.386 --rc geninfo_all_blocks=1 00:24:46.386 --rc geninfo_unexecuted_blocks=1 00:24:46.386 00:24:46.386 ' 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:46.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:46.386 --rc genhtml_branch_coverage=1 00:24:46.386 --rc genhtml_function_coverage=1 00:24:46.386 --rc genhtml_legend=1 00:24:46.386 --rc geninfo_all_blocks=1 00:24:46.386 --rc geninfo_unexecuted_blocks=1 00:24:46.386 00:24:46.386 ' 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:46.386 18:01:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.386 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:46.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:46.387 18:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:51.693 18:01:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:51.693 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:51.693 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.693 
18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.693 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:51.693 Found net devices under 0000:31:00.0: cvl_0_0 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:51.694 Found net devices under 0000:31:00.1: cvl_0_1 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.694 18:01:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:51.694 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:51.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:51.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:24:51.953 00:24:51.953 --- 10.0.0.2 ping statistics --- 00:24:51.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.953 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:51.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:24:51.953 00:24:51.953 --- 10.0.0.1 ping statistics --- 00:24:51.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.953 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3179601 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3179601 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3179601 ']' 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
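The connectivity setup above is the standard phy-mode topology for these tests: one port of the e810 pair (cvl_0_0) is moved into a private network namespace to play the target side, its peer (cvl_0_1) stays in the root namespace as the initiator, an iptables rule opens TCP/4420, and nvmf_tgt is then launched inside the namespace. Condensed from the trace (cvl_0_0/cvl_0_1 are this CI host's renamed ice netdevs; the nvmf_tgt path is shortened):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth

Putting the target in its own namespace forces the NVMe/TCP traffic between the two cabled e810 ports rather than letting the kernel short-circuit it over loopback, which is the point of the phy (NET_TYPE=phy) variant of this job.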
00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.953 18:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b848ee1b8bd900b18924eda4ec505193 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.9DX 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b848ee1b8bd900b18924eda4ec505193 0 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b848ee1b8bd900b18924eda4ec505193 0 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b848ee1b8bd900b18924eda4ec505193 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.9DX 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.9DX 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.9DX 00:24:52.891 18:01:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fab0ac8759de19d05b5ba9620926e639df22f5f6933434a0b94c35f2030b5b96 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.jJw 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fab0ac8759de19d05b5ba9620926e639df22f5f6933434a0b94c35f2030b5b96 3 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fab0ac8759de19d05b5ba9620926e639df22f5f6933434a0b94c35f2030b5b96 3 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fab0ac8759de19d05b5ba9620926e639df22f5f6933434a0b94c35f2030b5b96 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.jJw 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.jJw 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.jJw 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=54fb28a666f99a993b1e18ff54030abf3ecde48d3a122d3e 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.H57 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # 
format_dhchap_key 54fb28a666f99a993b1e18ff54030abf3ecde48d3a122d3e 0 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 54fb28a666f99a993b1e18ff54030abf3ecde48d3a122d3e 0 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=54fb28a666f99a993b1e18ff54030abf3ecde48d3a122d3e 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.H57 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.H57 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.H57 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fc28954475e4e2cd0ca83914b4def2b4678288cfcf128d48 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.zDs 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fc28954475e4e2cd0ca83914b4def2b4678288cfcf128d48 2 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fc28954475e4e2cd0ca83914b4def2b4678288cfcf128d48 2 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fc28954475e4e2cd0ca83914b4def2b4678288cfcf128d48 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.zDs 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.zDs 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.zDs 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@751 -- # local digest len file key 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d44c480a05a0c509b01443560268a53c 00:24:52.891 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.nuo 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d44c480a05a0c509b01443560268a53c 1 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d44c480a05a0c509b01443560268a53c 1 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d44c480a05a0c509b01443560268a53c 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.nuo 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.nuo 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.nuo 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=39474510fdc3099d6c545d1f79a56912 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.OKk 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 39474510fdc3099d6c545d1f79a56912 1 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 39474510fdc3099d6c545d1f79a56912 1 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key 
digest 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=39474510fdc3099d6c545d1f79a56912 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.OKk 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.OKk 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.OKk 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=148437f0aef9f60ecfdce6a0e9dc52cbf3e0bacebb296cfb 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.XR1 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 148437f0aef9f60ecfdce6a0e9dc52cbf3e0bacebb296cfb 2 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 148437f0aef9f60ecfdce6a0e9dc52cbf3e0bacebb296cfb 2 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=148437f0aef9f60ecfdce6a0e9dc52cbf3e0bacebb296cfb 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.XR1 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.XR1 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.XR1 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:53.151 18:01:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=862c4ec0ebc5c114d846cfb4887e61ab 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.TQE 00:24:53.151 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 862c4ec0ebc5c114d846cfb4887e61ab 0 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 862c4ec0ebc5c114d846cfb4887e61ab 0 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=862c4ec0ebc5c114d846cfb4887e61ab 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.TQE 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.TQE 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.TQE 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b790488f63ec868ff4c6d73b2160a9120bbe06e9cee2c97fb73d902283f68248 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.IxO 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b790488f63ec868ff4c6d73b2160a9120bbe06e9cee2c97fb73d902283f68248 3 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b790488f63ec868ff4c6d73b2160a9120bbe06e9cee2c97fb73d902283f68248 3 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=b790488f63ec868ff4c6d73b2160a9120bbe06e9cee2c97fb73d902283f68248 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.IxO 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.IxO 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.IxO 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3179601 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3179601 ']' 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:53.152 18:01:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.9DX 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.jJw ]] 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jJw 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.H57 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.412 
18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.zDs ]] 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zDs 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.nuo 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.OKk ]] 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OKk 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.XR1 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.TQE ]] 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.TQE 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.IxO 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 
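A note for readers of this trace: the gen_dhchap_key records above pair each hex string read from /dev/urandom with the DHHC-1 secret that the inline `python -` step emits, but xtrace hides the heredoc body. Below is a minimal sketch of that encoding, assuming the helper base64-wraps the ASCII hex key plus a little-endian CRC32; the variable names are illustrative, not the verbatim nvmf/common.sh source:

key=54fb28a666f99a993b1e18ff54030abf3ecde48d3a122d3e  # hex string drawn from /dev/urandom in the trace above
digest=0                                              # 0=null, 1=sha256, 2=sha384, 3=sha512
python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
# assumed layout: "DHHC-1:<digest>:" prefix, then base64(key || crc32(key) as 4 LE bytes), then a trailing ":"
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PY

Under that assumption the snippet prints the same DHHC-1:00: secret for keys[1] that reappears in the nvmet_auth_set_key echoes further down, which makes it a quick way to sanity-check a key file by hand.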
00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:53.412 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:53.671 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:53.671 18:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:56.206 Waiting for block devices as requested 00:24:56.206 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:56.206 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:56.206 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:56.206 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:56.206 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:56.206 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:56.206 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:56.206 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:56.206 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:24:56.466 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:56.466 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:56.466 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:56.725 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:56.725 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:56.725 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:56.725 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:56.983 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:57.549 No valid GPT data, bailing 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:57.549 18:01:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:57.549 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:24:57.807 00:24:57.807 Discovery Log Number of Records 2, Generation counter 2 00:24:57.807 =====Discovery Log Entry 0====== 00:24:57.807 trtype: tcp 00:24:57.807 adrfam: ipv4 00:24:57.807 subtype: current discovery subsystem 00:24:57.807 treq: not specified, sq flow control disable supported 00:24:57.807 portid: 1 00:24:57.807 trsvcid: 4420 00:24:57.807 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:57.807 traddr: 10.0.0.1 00:24:57.807 eflags: none 00:24:57.807 sectype: none 00:24:57.807 =====Discovery Log Entry 1====== 00:24:57.807 trtype: tcp 00:24:57.807 adrfam: ipv4 00:24:57.807 subtype: nvme subsystem 00:24:57.807 treq: not specified, sq flow control disable supported 00:24:57.807 portid: 1 00:24:57.807 trsvcid: 4420 00:24:57.807 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:57.807 traddr: 10.0.0.1 00:24:57.807 eflags: none 00:24:57.807 sectype: none 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: ]] 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:57.807 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.808 nvme0n1 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: ]] 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
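One more note on the bare echo records above: bash xtrace does not print redirections, so `echo /dev/nvme0n1`, `echo 10.0.0.1`, `echo 'hmac(sha256)'`, `echo ffdhe2048`, and the `echo DHHC-1:...` lines show only the data, never the configfs file receiving it. A hedged reconstruction of the kernel-target side follows; the attribute names match the usual nvmet configfs layout and are assumptions, not a transcript of common.sh/auth.sh:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # the disk that passed the GPT probe ("No valid GPT data, bailing")
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"             # the listener the discovery log reports
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 0 > "$subsys/attr_allow_any_host"                   # only the allowed_hosts link below may connect
ln -s "$host" "$subsys/allowed_hosts/"
echo 'hmac(sha256)' > "$host/dhchap_hash"                # nvmet_auth_set_key sha256 ffdhe2048 1
echo ffdhe2048 > "$host/dhchap_dhgroup"
echo "DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==:" > "$host/dhchap_key"
echo "DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==:" > "$host/dhchap_ctrl_key"

On the initiator side everything goes through rpc_cmd, which wraps scripts/rpc.py against the target's RPC socket. Replayed by hand, the first connect_authenticate pass seen in the trace amounts to the calls below (the rpc.py path and socket selection are assumed; the RPC names and flags are taken directly from the trace):

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
$rpc keyring_file_add_key key1 /tmp/spdk.key-null.H57      # host secret, registered earlier in the trace
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.zDs   # controller secret for bidirectional auth
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc bdev_nvme_get_controllers                             # the loop asserts the controller came up as nvme0
$rpc bdev_nvme_detach_controller nvme0                     # tear down before the next digest/dhgroup combination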
00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.808 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.066 nvme0n1 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.066 18:01:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: ]] 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.066 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.323 nvme0n1 00:24:58.323 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.323 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.323 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.323 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.323 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.323 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.323 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.323 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.323 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.323 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.323 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.323 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.323 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:58.323 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.323 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: ]] 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.324 18:01:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.324 nvme0n1 00:24:58.324 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.324 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.324 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.324 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:24:58.324 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.324 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: ]] 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.582 nvme0n1 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.582 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.841 nvme0n1 00:24:58.841 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.841 18:01:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.841 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.841 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.841 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.841 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.841 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.841 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.841 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.841 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.841 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.841 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:58.841 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.841 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: ]] 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.842 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.101 nvme0n1 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: ]] 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:59.101 
18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.101 nvme0n1 00:24:59.101 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: ]] 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.360 18:01:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:59.360 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:59.361 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.361 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:59.361 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.361 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.361 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.361 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.361 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:59.361 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:59.361 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:59.361 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.361 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.361 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:59.361 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.361 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:59.361 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:59.361 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:59.361 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:59.361 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.361 18:01:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.361 nvme0n1 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: ]] 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.361 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.621 18:01:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.621 nvme0n1 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:59.621 18:01:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.621 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.880 nvme0n1 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: ]] 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:59.880 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.881 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.881 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.881 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.881 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:59.881 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:24:59.881 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:59.881 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.881 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.881 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:59.881 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.881 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:59.881 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:59.881 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:59.881 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:59.881 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.881 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.139 nvme0n1 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:00.139 18:01:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:00.139 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: ]] 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.140 18:01:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.399 nvme0n1 00:25:00.399 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:25:00.399 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.399 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.399 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.399 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.399 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.399 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.399 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.399 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.399 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.399 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.399 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.399 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:00.399 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.399 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:00.399 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: ]] 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.400 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.658 nvme0n1 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:00.658 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: ]] 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.659 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.918 nvme0n1 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.918 18:01:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.918 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.177 nvme0n1 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: ]] 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.177 18:01:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.177 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.177 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.177 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:01.436 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:01.436 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:01.436 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.436 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.436 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:01.436 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.436 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:01.436 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:01.436 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:01.436 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:01.436 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.436 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.694 nvme0n1 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:01.694 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: ]] 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 
00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.695 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.953 nvme0n1 00:25:01.953 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.954 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.954 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.954 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.954 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.214 18:01:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: ]] 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.214 18:01:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.475 nvme0n1 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: ]] 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.475 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.046 nvme0n1 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.046 18:01:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:03.046 18:01:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:03.046 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:03.047 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:03.047 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:03.047 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.047 18:01:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.307 nvme0n1 00:25:03.307 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.307 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: ]] 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.308 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:03.878 nvme0n1 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: ]] 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:03.878 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:03.879 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:03.879 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.879 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.139 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.139 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.139 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.139 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.139 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.139 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.139 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.139 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.139 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.139 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.139 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.139 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.139 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.139 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.139 18:01:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.708 nvme0n1 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:04.708 
18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: ]] 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.708 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.709 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.709 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.709 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.709 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:04.709 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.709 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.278 nvme0n1 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: ]] 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.278 
18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.278 18:01:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.847 nvme0n1 00:25:05.847 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.847 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.847 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.847 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.847 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.847 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.847 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.847 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.847 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.847 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:05.847 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.847 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.847 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.848 18:01:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.420 nvme0n1 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: ]] 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.420 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.680 nvme0n1 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: ]] 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.680 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.941 nvme0n1 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:06.941 18:01:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: ]] 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.941 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.941 nvme0n1 00:25:07.201 18:01:54 
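
The trace keeps re-entering nvmet_auth_set_key (host/auth.sh@42-51). A minimal sketch of that helper, read back from the xtrace, follows. Because `set -x` does not print redirections, the destinations of the echo commands are not visible in this log; the kernel-nvmet configfs attributes used below (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrlr_key under the host entry) and the $nvmet_host variable name are assumptions, not something the trace shows.

# Sketch of nvmet_auth_set_key as reconstructed from the xtrace above.
# Assumptions: keys[]/ckeys[] hold the DHHC-1 secrets printed in the log, and
# the echoes are redirected into the nvmet configfs host entry ($nvmet_host is
# a hypothetical name; xtrace hides the actual redirection targets).
nvmet_auth_set_key() {
	local digest dhgroup keyid key ckey
	digest="$1" dhgroup="$2" keyid="$3"
	key="${keys[keyid]}" ckey="${ckeys[keyid]}"

	echo "hmac($digest)" > "$nvmet_host/dhchap_hash"   # e.g. hmac(sha384)
	echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"     # e.g. ffdhe2048
	echo "$key" > "$nvmet_host/dhchap_key"             # host secret
	# Controller key only when one is configured; keyid 4 has an empty ckey
	# in this run, so the target then authenticates unidirectionally.
	[[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrlr_key"
}
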
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: ]] 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.201 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.202 nvme0n1 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.202 18:01:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.202 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.462 nvme0n1 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: ]] 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.462 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.721 nvme0n1 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.721 
18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: ]] 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.721 18:01:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.721 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.980 nvme0n1 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: ]] 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.980 nvme0n1 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.980 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: ]] 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.240 18:01:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.240 nvme0n1 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:08.240 
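
Stripped of the per-command timestamps, the control flow producing this repetition is the triple loop at host/auth.sh@100-104: each digest is crossed with each DH group and each key index, every combination first programmed into the kernel target and then exercised from the SPDK host. A sketch of that sweep; the array contents shown are the ones visible in this run (sha384 with ffdhe2048, ffdhe3072 and ffdhe4096, key ids 0-4).

# Sweep structure read back from the xtrace (host/auth.sh@100-104).
# digests/dhgroups/keys are populated earlier in the script; only the values
# actually expanded in this run are noted here.
for digest in "${digests[@]}"; do
	for dhgroup in "${dhgroups[@]}"; do
		for keyid in "${!keys[@]}"; do
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # program the kernel target
			connect_authenticate "$digest" "$dhgroup" "$keyid" # attach/verify/detach via SPDK
		done
	done
done
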
18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.240 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.502 nvme0n1 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.502 
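
Before every attach, connect_authenticate resolves the target address through get_main_ns_ip (nvmf/common.sh@769-783). As far as this trace shows, the helper maps the transport to the environment variable that holds a reachable address and prints its value via indirect expansion; only the success path is exercised here, so the failure branches between @778 and @783 are inferred.

# get_main_ns_ip as reconstructed from the xtrace (nvmf/common.sh@769-783).
# TEST_TRANSPORT is assumed to be the variable expanding to "tcp" in this run,
# and the return-1 branches are guesses for the unexercised error paths.
get_main_ns_ip() {
	local ip
	local -A ip_candidates=()
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP

	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}
	[[ -z ${!ip} ]] && return 1  # ${!ip} expands $NVMF_INITIATOR_IP, 10.0.0.1 here
	echo "${!ip}"
}
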
18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: ]] 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.502 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.762 nvme0n1 00:25:08.762 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.762 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.762 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.762 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.762 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.762 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.762 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.762 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.762 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.762 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.762 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.762 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.762 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:08.762 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: ]] 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:08.763 18:01:56 
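
The host-side half of each iteration is connect_authenticate (host/auth.sh@55-65). Read back from the xtrace, it narrows the initiator to the digest and DH group under test, attaches with the chosen key (plus the controller key when one exists), treats a controller named nvme0 in bdev_nvme_get_controllers as proof that DH-CHAP completed, and detaches again. In the sketch below, rpc_cmd, the NQNs and the key names are taken verbatim from the log; "-t tcp" is the value as expanded in this run.

# connect_authenticate as reconstructed from the xtrace (host/auth.sh@55-65).
connect_authenticate() {
	local digest dhgroup keyid ckey
	digest="$1" dhgroup="$2" keyid="$3"
	# Add --dhchap-ctrlr-key only when a controller key exists for this keyid
	ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"

	# The attach only succeeds if authentication did; double-check by name
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}
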
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.763 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.023 nvme0n1 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: ]] 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.023 18:01:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.285 nvme0n1 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: ]] 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.285 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.546 nvme0n1 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.546 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:09.806 18:01:57 
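
Key 4 is the edge case in this loop: its ckeys entry is empty, so the trace shows a bare "ckey=" and the attach goes out with --dhchap-key key4 and no --dhchap-ctrlr-key, i.e. unidirectional authentication. That is what the recurring auth.sh@58 line handles; the ${...:+...} expansion yields zero words when the controller secret is absent, as a minimal standalone demonstration:

    # ":+" expands to the flag pair only if ckeys[keyid] is set and non-empty.
    keyid=4
    ckeys[4]=''
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"    # prints 0: the attach below omits the flag entirely
    # rpc_cmd bdev_nvme_attach_controller ... --dhchap-key "key${keyid}" "${ckey[@]}"
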
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.806 nvme0n1 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.806 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: ]] 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.065 18:01:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.326 nvme0n1 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: ]] 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.326 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.897 nvme0n1 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.897 18:01:58 
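
A note on the secret format cycling through this trace: DHHC-1:XX:<base64>: is the NVMe-oF in-band authentication secret representation, where XX selects the secret size (00 = raw, 01/02/03 = 32/48/64-byte secrets associated with SHA-256/384/512) and the base64 payload carries the secret plus a CRC-32 tail. Secrets in this shape can be produced with nvme-cli, for example (flag names per nvme-cli's gen-dhchap-key; verify against the installed version):

    # Generate a 48-byte secret in the DHHC-1:02: format for this host NQN.
    nvme gen-dhchap-key --hmac=2 --nqn nqn.2024-02.io.spdk:host0
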
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: ]] 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.897 18:01:58 
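
The get_main_ns_ip block that repeats before every attach (nvmf/common.sh@769-783, traced again just below) is a transport-to-address lookup: it maps the transport to the name of an environment variable and then dereferences it. A reconstruction from the trace with the indirect expansion made explicit; treat this as a sketch rather than verbatim common.sh:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1    # traced as [[ -z tcp ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z $ip ]] && return 1                # traced as [[ -z NVMF_INITIATOR_IP ]]
        [[ -z ${!ip} ]] && return 1             # traced as [[ -z 10.0.0.1 ]]
        echo "${!ip}"                           # -> 10.0.0.1
    }
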
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.897 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.158 nvme0n1 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: ]] 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.158 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.159 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:11.159 18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.159 
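
On the \n\v\m\e\0 noise in every verification step: inside [[ ]] the right-hand side of == is a glob pattern unless quoted, and auth.sh quotes it to force a literal comparison; set -x then renders that quoted literal with each character backslash-escaped. Unescaped, the check is simply:

    # auth.sh@64: confirm the attached controller is named nvme0.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]    # xtrace prints: [[ nvme0 == \n\v\m\e\0 ]]
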
18:01:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.729 nvme0n1 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.729 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.989 nvme0n1 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.989 18:01:59 
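
With ffdhe6144 finished, the trace rolls into ffdhe8192 for the same five keys. The driver is the nested loop visible at auth.sh@101-104, with sha384 presumably fixed by an enclosing digest loop earlier in the log:

    # auth.sh@101-104: exercise every DH group against every configured key.
    for dhgroup in "${dhgroups[@]}"; do     # ... ffdhe4096 ffdhe6144 ffdhe8192 ...
        for keyid in "${!keys[@]}"; do      # 0 1 2 3 4
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done
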
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: ]] 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.989 18:01:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.568 nvme0n1 00:25:12.568 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: ]] 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.569 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.926 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.926 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.926 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.926 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.926 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.926 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.926 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.926 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.926 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.926 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.926 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.926 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.926 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:12.926 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.926 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.221 nvme0n1 00:25:13.221 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.221 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.221 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.221 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.221 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.221 18:02:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: ]] 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.221 
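
The xtrace_disable / set +x / [[ 0 == 0 ]] triplets bracketing every RPC come from the rpc_cmd wrapper in autotest_common.sh: tracing is muted while the RPC runs and the exit status is asserted once tracing resumes. Roughly, and heavily simplified (the real helper also dispatches over a persistent RPC session and restores the caller's exact xtrace state):

    rpc_cmd() {
        xtrace_disable                     # autotest_common.sh@563
        "$rootdir/scripts/rpc.py" "$@"     # actual dispatch not shown in this excerpt
        local status=$?
        xtrace_restore                     # re-enables set -x
        [[ $status == 0 ]]                 # traced as [[ 0 == 0 ]] on success
    }
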
18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.221 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.479 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.479 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.479 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.479 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.479 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.479 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.046 nvme0n1 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: ]] 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.046 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.047 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.047 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.047 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.047 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.047 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.047 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.047 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.047 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.047 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.047 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.047 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.047 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.047 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.047 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.047 18:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.614 nvme0n1 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.614 18:02:02 
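The nvmet_auth_set_key frames (host/auth.sh@42-@51) show only bare echo commands because set -x does not print redirections; the strings are being written into the kernel nvmet target's per-host configfs attributes. A sketch with the destinations filled in; the /sys/kernel/config/nvmet path and the attribute names are an assumption based on the kernel nvmet auth layout, not something this log itself shows:

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # assumed configfs location; xtrace hides the actual redirect targets
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "$host/dhchap_hash"
    echo "$dhgroup" > "$host/dhchap_dhgroup"
    echo "$key" > "$host/dhchap_key"
    # keyid 4 carries no controller key in this run, so the last write is conditional
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}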
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.614 18:02:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.614 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.183 nvme0n1 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: ]] 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.183 18:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
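get_main_ns_ip (nvmf/common.sh@769-@783) picks the address to dial by mapping the transport to the name of an environment variable and expanding it indirectly; with tcp that resolves NVMF_INITIATOR_IP to 10.0.0.1, which is what every attach in this run uses. A reconstruction from the trace; TEST_TRANSPORT is an assumed name for whatever variable held "tcp" here:

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1    # indirect expansion: the value of NVMF_INITIATOR_IP
    echo "${!ip}"                  # 10.0.0.1 in this run
}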
common/autotest_common.sh@10 -- # set +x 00:25:15.442 nvme0n1 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: ]] 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:15.442 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.443 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.702 nvme0n1 00:25:15.702 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.702 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.702 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.702 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.702 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.702 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.702 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.702 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.702 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.702 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.702 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.702 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.702 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:15.703 
18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: ]] 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.703 nvme0n1 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.703 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.963 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: ]] 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.964 
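The host/auth.sh@58 assignment seen throughout the trace is what lets keyid 4 (which has no controller key) share the same attach path as the others: ${var:+word} expands to the flag pair only when the controller key is non-empty, and an empty array vanishes entirely under "${ckey[@]}". A stand-alone illustration with hypothetical key material:

ckeys=([0]="DHHC-1:03:deadbeef:" [4]="")   # hypothetical values, for illustration only
for keyid in 0 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${ckey[*]:-(no controller key flag)}"
done
# keyid=0 -> --dhchap-ctrlr-key ckey0
# keyid=4 -> (no controller key flag)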
18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.964 nvme0n1 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.964 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.224 nvme0n1 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: ]] 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.224 18:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.484 nvme0n1 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.484 
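The for-frames at host/auth.sh@100-@103 show this section is walking a digest x dhgroup x keyid matrix: sha384/ffdhe8192 finishes above, then the dhgroup loop restarts under sha512 with ffdhe2048 and, below, ffdhe3072. The driver presumably has the shape sketched here, using the two helpers sketched earlier; only the digests and groups visible in this excerpt are listed (the full run covers more), and the two key entries are copied verbatim from the log with keyids 1-3 omitted:

declare -a keys ckeys
keys[0]="DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E:"
ckeys[0]="DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=:"
keys[4]="DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=:"
ckeys[4]=""

digests=(sha384 sha512)                   # only those seen in this excerpt
dhgroups=(ffdhe2048 ffdhe3072 ffdhe8192)  # likewise; an assumption about the full list

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done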
18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: ]] 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.484 18:02:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.484 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.485 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.485 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.485 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.485 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.485 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.485 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.485 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:16.485 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.485 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.485 nvme0n1 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:16.746 18:02:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: ]] 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.746 nvme0n1 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: ]] 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:16.746 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.747 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:16.747 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:16.747 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:16.747 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.747 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:16.747 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.747 18:02:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.006 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.007 nvme0n1 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:17.007 
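[Annotation] The DHHC-1 strings echoed throughout this trace (e.g. the ffdhe3072/keyid-3 key a few frames up) use the NVMe DH-HMAC-CHAP secret representation DHHC-1:<hh>:<base64 payload>:, where, as I understand the spec, hh hints at the secret's size/derivation (00 = plain, 01/02/03 = SHA-256/384/512-sized) and the payload is the raw secret followed by a 4-byte CRC-32. A small sanity-check sketch, assuming coreutils cut/base64/wc:

  # Decode the payload and subtract the 4 CRC bytes to get the secret length.
  key='DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==:'
  echo "$(( $(cut -d: -f3 <<< "$key" | base64 -d | wc -c) - 4 )) secret bytes"
  # prints "48 secret bytes", consistent with the :02: (SHA-384-length) hint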
18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.007 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
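[Annotation] The nvmet_auth_set_key frames traced above (host/auth.sh@42-51, most recently for ffdhe3072/keyid 4) are the target-side half of each cycle. xtrace does not record redirection targets, so the destinations of the @48-@51 echos are an assumption; on the standard kernel-nvmet configfs layout (with a hypothetical $hostnqn) the helper plausibly reduces to:

  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
      echo "hmac($digest)" > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_hash"    # @48
      echo "$dhgroup" > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_dhgroup"      # @49
      echo "$key" > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_key"              # @50
      # keyid 4 has no controller key, hence the [[ -z '' ]] frame in the trace
      [[ -z $ckey ]] || echo "$ckey" > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_ctrl_key"  # @51
  }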
00:25:17.267 nvme0n1 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: ]] 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:17.267 18:02:04 
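[Annotation] The @101/@102 frames above mark the sweep that generates this whole section: an outer loop over DH groups and an inner loop over all five key ids, each iteration programming the target and then exercising the initiator. Reconstructed shape, with the digest fixed at sha512 for this stretch of the log (other digests, and groups before ffdhe3072, are traced outside this excerpt):

  for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do    # @101
      for keyid in "${!keys[@]}"; do                            # @102, key ids 0..4
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"         # @103, target side
          connect_authenticate sha512 "$dhgroup" "$keyid"       # @104, initiator side
      done
  done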
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.267 18:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.267 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.267 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.267 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.267 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.267 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.267 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.267 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.267 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.267 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.267 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.267 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.267 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.267 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:17.267 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.267 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.527 nvme0n1 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.527 18:02:05 
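[Annotation] Note the @58 assignment at the start of this stretch: ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}). The :+ expansion makes the controller-key option pair vanish entirely when ckeys[keyid] is unset or empty (as for keyid 4 earlier in the trace), so the attach RPC never sees a dangling flag. Self-contained demo of the idiom:

  ckeys=([0]=somesecret [4]="")
  for keyid in 0 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${#ckey[@]} extra args"   # 2 for keyid 0, 0 for keyid 4
  done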
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: ]] 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.527 18:02:05 
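[Annotation] The nvmf/common.sh@769-783 frames above (continuing below) trace get_main_ns_ip, which supplies the -a address for every attach in this log: it maps the transport to the name of an environment variable and dereferences it, yielding 10.0.0.1 from NVMF_INITIATOR_IP for tcp. A reconstruction; the transport variable's actual name is an assumption, since xtrace only shows its expanded value:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP    # @772
          [tcp]=NVMF_INITIATOR_IP        # @773
      )
      [[ -z $TEST_TRANSPORT ]] && return 1                     # @775, traced as [[ -z tcp ]]
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1   # @775, traced as [[ -z NVMF_INITIATOR_IP ]]
      ip=${ip_candidates[$TEST_TRANSPORT]}                     # @776
      [[ -z ${!ip} ]] && return 1                              # @778, traced as [[ -z 10.0.0.1 ]]
      echo "${!ip}"                                            # @783
  }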
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.527 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.786 nvme0n1 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: ]] 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.786 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.046 nvme0n1 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: ]] 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.046 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.047 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.047 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.047 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.047 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.047 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.047 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.047 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.047 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:18.047 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.047 18:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.307 nvme0n1 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.307 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.568 nvme0n1 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: ]] 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.568 18:02:06 
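[Annotation] Every initiator-side cycle in this section follows the same five steps at host/auth.sh@55-65; the ffdhe6144/keyid-0 instance continues just below with get_main_ns_ip and the attach. Reconstructed from those frames (rpc_cmd assumed to be the suite's JSON-RPC wrapper):

  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})              # @58
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
          --dhchap-dhgroups "$dhgroup"                                             # @60
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
          -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"    # @61
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]       # @64
      rpc_cmd bdev_nvme_detach_controller nvme0                                    # @65
  }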
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.568 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.139 nvme0n1 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: ]] 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:19.140 18:02:06 
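[Annotation] The rpc_cmd frames map one-to-one onto SPDK's scripts/rpc.py, so the ffdhe6144/keyid-1 attach just traced (its verification and detach follow below) can be replayed by hand. Caveats: key1/ckey1 are key names that an earlier, non-excerpted part of the run presumably registered with the SPDK keyring, and the RPC socket path is left at its default:

  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0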
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.140 18:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.400 nvme0n1 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:19.400 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: ]] 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.401 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.968 nvme0n1 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: ]] 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.968 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.227 nvme0n1 00:25:20.228 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.228 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.228 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.228 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.228 18:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:20.228 18:02:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.228 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.796 nvme0n1 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
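Note that the keyid=4 pass above carries an empty controller key (ckey=), and connect_authenticate builds its optional arguments as ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}): with an empty value the :+ expansion yields zero words, so --dhchap-ctrlr-key is omitted entirely and only the host is authenticated (unidirectional DH-HMAC-CHAP). The idiom in isolation:

    # :+ expands to the alternate words only when the variable is set and non-empty
    demo_ckeys=([3]="DHHC-1:00:ODYy..." [4]="")
    for keyid in 3 4; do
        ckey=(${demo_ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra args: ${#ckey[@]}"   # 2 for keyid=3, 0 for keyid=4
    done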
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjg0OGVlMWI4YmQ5MDBiMTg5MjRlZGE0ZWM1MDUxOTOJ7O8E: 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: ]] 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmFiMGFjODc1OWRlMTlkMDViNWJhOTYyMDkyNmU2MzlkZjIyZjVmNjkzMzQzNGEwYjk0YzM1ZjIwMzBiNWI5NhzFveg=: 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.796 18:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.381 nvme0n1 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: ]] 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.381 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.950 nvme0n1 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.950 18:02:09 
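Every secret in this trace is an NVMe TP 8006 container of the form DHHC-1:<t>:<base64>:, where <t> names the transformation applied to the secret (00 cleartext, 01/02/03 for SHA-256/384/512) and, per that container format, the base64 payload is the raw secret with a little-endian CRC-32 appended (the CRC detail comes from the spec, not from anything visible in this log). A quick length check against one of the 32-byte keys above:

    key='DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB:'
    b64=${key#DHHC-1:??:}   # strip the prefix and transform id
    b64=${b64%:}            # strip the trailing colon
    echo -n "$b64" | base64 -d | wc -c   # prints 36: a 32-byte secret + 4 CRC bytes

nvme-cli's gen-dhchap-key, where available, emits secrets in this same container format.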
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: ]] 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.950 18:02:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.950 18:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.518 nvme0n1 00:25:22.518 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.518 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.518 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.518 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.518 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.518 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTQ4NDM3ZjBhZWY5ZjYwZWNmZGNlNmEwZTlkYzUyY2JmM2UwYmFjZWJiMjk2Y2ZicCYsRA==: 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: ]] 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODYyYzRlYzBlYmM1YzExNGQ4NDZjZmI0ODg3ZTYxYWLhdbjB: 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.778 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.778 
18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.346 nvme0n1 00:25:23.346 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.346 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.346 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.346 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.346 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.346 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.346 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.346 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.346 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.346 18:02:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Yjc5MDQ4OGY2M2VjODY4ZmY0YzZkNzNiMjE2MGE5MTIwYmJlMDZlOWNlZTJjOTdmYjczZDkwMjI4M2Y2ODI0OA8N0mw=: 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.346 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.916 nvme0n1 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: ]] 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.916 request: 00:25:23.916 { 00:25:23.916 "name": "nvme0", 00:25:23.916 "trtype": "tcp", 00:25:23.916 "traddr": "10.0.0.1", 00:25:23.916 "adrfam": "ipv4", 00:25:23.916 "trsvcid": "4420", 00:25:23.916 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:23.916 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:23.916 "prchk_reftag": false, 00:25:23.916 "prchk_guard": false, 00:25:23.916 "hdgst": false, 00:25:23.916 "ddgst": false, 00:25:23.916 "allow_unrecognized_csi": false, 00:25:23.916 "method": "bdev_nvme_attach_controller", 00:25:23.916 "req_id": 1 00:25:23.916 } 00:25:23.916 Got JSON-RPC error response 00:25:23.916 response: 00:25:23.916 { 00:25:23.916 "code": -5, 00:25:23.916 "message": "Input/output error" 00:25:23.916 } 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
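Here the test flips to the failure matrix: the target still insists on authentication, so an attach with no --dhchap-key (and, just after, one with a key the target does not hold) has to fail, and autotest_common.sh's NOT wrapper inverts the exit status so the expected failure keeps the run green. The JSON-RPC code -5 is -EIO, reported as 'Input/output error' when the DH-HMAC-CHAP handshake cannot complete. Stripped of the signal-exit bookkeeping visible at @652-@679, the wrapper reduces to roughly:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # succeed only if the wrapped command failed
    }
    NOT false && echo 'expected failure observed'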
00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:23.916 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:23.917 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:23.917 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:23.917 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:23.917 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:23.917 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:23.917 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.917 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.176 request: 00:25:24.176 { 00:25:24.176 "name": "nvme0", 00:25:24.176 "trtype": "tcp", 00:25:24.176 "traddr": "10.0.0.1", 00:25:24.176 "adrfam": "ipv4", 00:25:24.176 "trsvcid": "4420", 00:25:24.176 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:24.176 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:24.176 "prchk_reftag": false, 00:25:24.176 "prchk_guard": false, 00:25:24.176 "hdgst": false, 00:25:24.176 "ddgst": false, 00:25:24.176 "dhchap_key": "key2", 00:25:24.176 "allow_unrecognized_csi": false, 00:25:24.176 "method": "bdev_nvme_attach_controller", 00:25:24.176 "req_id": 1 00:25:24.176 } 00:25:24.176 Got JSON-RPC error response 00:25:24.176 response: 00:25:24.176 { 00:25:24.176 "code": -5, 00:25:24.176 "message": "Input/output error" 00:25:24.176 } 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.176 request: 00:25:24.176 { 00:25:24.176 "name": "nvme0", 00:25:24.176 "trtype": "tcp", 00:25:24.176 "traddr": "10.0.0.1", 00:25:24.176 "adrfam": "ipv4", 00:25:24.176 "trsvcid": "4420", 00:25:24.176 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:24.176 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:24.176 "prchk_reftag": false, 00:25:24.176 "prchk_guard": false, 00:25:24.176 "hdgst": false, 00:25:24.176 "ddgst": false, 00:25:24.176 "dhchap_key": "key1", 00:25:24.176 "dhchap_ctrlr_key": "ckey2", 00:25:24.176 "allow_unrecognized_csi": false, 00:25:24.176 "method": "bdev_nvme_attach_controller", 00:25:24.176 "req_id": 1 00:25:24.176 } 00:25:24.176 Got JSON-RPC error response 00:25:24.176 response: 00:25:24.176 { 00:25:24.176 "code": -5, 00:25:24.176 "message": "Input/output 
error" 00:25:24.176 } 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:24.176 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:24.177 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:24.177 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.177 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.177 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.177 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.177 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.177 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.177 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.177 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.177 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.177 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.177 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:24.177 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.177 18:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.436 nvme0n1 00:25:24.436 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.436 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:24.436 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.436 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.436 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:24.436 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:24.436 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:24.436 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:24.436 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.436 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:24.436 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:24.436 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: ]] 00:25:24.436 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:24.436 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:24.436 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.436 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.437 request: 00:25:24.437 { 00:25:24.437 "name": "nvme0", 00:25:24.437 "dhchap_key": "key1", 00:25:24.437 "dhchap_ctrlr_key": "ckey2", 00:25:24.437 "method": "bdev_nvme_set_keys", 00:25:24.437 "req_id": 1 00:25:24.437 } 00:25:24.437 Got JSON-RPC error response 00:25:24.437 response: 00:25:24.437 { 00:25:24.437 "code": -13, 00:25:24.437 "message": "Permission denied" 00:25:24.437 } 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
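bdev_nvme_set_keys re-keys a live, connected controller: auth.sh@132 first installs the keyid-2 secrets on the target, @133 then rotates the host side to the matching pair, which re-authenticates in place and returns cleanly. The @136 case rotates to a mismatched pair, and the re-authentication is refused with -13 (-EPERM, 'Permission denied'). The two calls, as they would look through rpc.py:

    rpc=./scripts/rpc.py
    $rpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2   # matches target: ok
    $rpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2   # mismatch: -13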
!es == 0 )) 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:24.437 18:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:25.816 18:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.816 18:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:25.816 18:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.816 18:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.816 18:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.816 18:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:25.816 18:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTRmYjI4YTY2NmY5OWE5OTNiMWUxOGZmNTQwMzBhYmYzZWNkZTQ4ZDNhMTIyZDNlz13tTA==: 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: ]] 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
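After the refused rotation the controller is expected to drop out on its own; this is why the @128 attach passed --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1, so a reconnect that fails authentication deletes the controller within about a second instead of retrying forever. The @137/@138 exchange is simply a poll on the controller count:

    # wait until bdev_nvme_get_controllers reports an empty list
    while (( $(./scripts/rpc.py bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1
    done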
DHHC-1:02:ZmMyODk1NDQ3NWU0ZTJjZDBjYTgzOTE0YjRkZWYyYjQ2NzgyODhjZmNmMTI4ZDQ4+G/rHg==: 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.754 nvme0n1 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDQ0YzQ4MGEwNWEwYzUwOWIwMTQ0MzU2MDI2OGE1M2OWZMOu: 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: ]] 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Mzk0NzQ1MTBmZGMzMDk5ZDZjNTQ1ZDFmNzlhNTY5MTLeJ9hV: 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.754 request: 00:25:26.754 { 00:25:26.754 "name": "nvme0", 00:25:26.754 "dhchap_key": "key2", 00:25:26.754 "dhchap_ctrlr_key": "ckey1", 00:25:26.754 "method": "bdev_nvme_set_keys", 00:25:26.754 "req_id": 1 00:25:26.754 } 00:25:26.754 Got JSON-RPC error response 00:25:26.754 response: 00:25:26.754 { 00:25:26.754 "code": -13, 00:25:26.754 "message": "Permission denied" 00:25:26.754 } 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:26.754 18:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:27.692 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.692 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:27.692 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.692 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.692 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.692 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:27.692 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:27.692 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:27.692 18:02:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:27.692 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:27.692 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:27.950 rmmod nvme_tcp 00:25:27.950 rmmod nvme_fabrics 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3179601 ']' 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3179601 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3179601 ']' 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3179601 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3179601 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3179601' 00:25:27.950 killing process with pid 3179601 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3179601 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3179601 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:27.950 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:27.951 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:27.951 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:27.951 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:27.951 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:27.951 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:27.951 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:27.951 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:27.951 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.951 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:25:27.951 18:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.487 18:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:30.487 18:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:30.487 18:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:30.487 18:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:30.487 18:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:30.487 18:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:30.487 18:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:30.487 18:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:30.487 18:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:30.487 18:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:30.487 18:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:30.487 18:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:30.487 18:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:32.389 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:25:32.389 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:25:32.389 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:25:32.389 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:25:32.389 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:25:32.389 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:25:32.389 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:25:32.389 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:25:32.389 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:25:32.389 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:25:32.389 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:25:32.389 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:25:32.647 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:25:32.647 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:25:32.647 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:25:32.647 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:25:32.647 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:25:32.907 18:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.9DX /tmp/spdk.key-null.H57 /tmp/spdk.key-sha256.nuo /tmp/spdk.key-sha384.XR1 /tmp/spdk.key-sha512.IxO /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:32.907 18:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:35.442 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:25:35.442 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:25:35.442 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
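A note for readers following the configfs traffic: the teardown just logged is the inverse of the kernel nvmet target the auth test assembled, including the host entry whose dhchap_key/dhchap_ctrl_key files carried the DHHC-1 secrets used above. Condensed into a sketch (NQNs and paths are the ones from this run; the target of the bare 'echo 0' is assumed to be the namespace enable attribute, which the log does not show):

    # sketch of clean_kernel_target: unlink references before rmdir, modules last
    cfs=/sys/kernel/config/nvmet
    subsys=$cfs/subsystems/nqn.2024-02.io.spdk:cnode0
    rm $subsys/allowed_hosts/nqn.2024-02.io.spdk:host0   # drop the host ACL symlink
    rmdir $cfs/hosts/nqn.2024-02.io.spdk:host0           # then the host entry itself
    echo 0 > $subsys/namespaces/1/enable                 # assumed: disable the namespace
    rm -f $cfs/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir $subsys/namespaces/1
    rmdir $cfs/ports/1
    rmdir $subsys
    modprobe -r nvmet_tcp nvmet                          # only once configfs is empty

The ordering is the whole point: configfs will not remove a directory that still has symlinks or enabled children, so ACLs and port bindings go first and the modules come out last.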
00:25:35.442 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:25:35.442 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:25:35.442 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:25:35.442 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:25:35.442 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:25:35.442 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:25:35.442 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:25:35.442 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:25:35.442 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:25:35.442 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:25:35.442 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:25:35.442 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:25:35.442 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:25:35.442 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:25:35.442 00:25:35.442 real 0m49.235s 00:25:35.442 user 0m43.114s 00:25:35.442 sys 0m11.477s 00:25:35.442 18:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:35.442 18:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.442 ************************************ 00:25:35.442 END TEST nvmf_auth_host 00:25:35.442 ************************************ 00:25:35.442 18:02:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:25:35.442 18:02:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:35.442 18:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:35.442 18:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:35.442 18:02:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.442 ************************************ 00:25:35.442 START TEST nvmf_digest 00:25:35.442 ************************************ 00:25:35.442 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:35.702 * Looking for test storage... 
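A note on the harness itself: the END TEST / START TEST banners and the real/user/sys block above come from the run_test wrapper in autotest_common.sh, which times each suite and brackets its output. Stripped of its argument checking and xtrace bookkeeping, a sketch of the pattern (simplified, not the literal helper):

    # simplified run_test: banner, time the suite, banner
    run_test() {
            local name=$1; shift
            echo "************************************"
            echo "START TEST $name"
            echo "************************************"
            time "$@"     # here: test/nvmf/host/digest.sh --transport=tcp
            echo "************************************"
            echo "END TEST $name"
            echo "************************************"
    }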
00:25:35.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:35.702 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:35.702 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:25:35.702 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:35.702 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:35.702 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:35.702 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:35.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.703 --rc genhtml_branch_coverage=1 00:25:35.703 --rc genhtml_function_coverage=1 00:25:35.703 --rc genhtml_legend=1 00:25:35.703 --rc geninfo_all_blocks=1 00:25:35.703 --rc geninfo_unexecuted_blocks=1 00:25:35.703 00:25:35.703 ' 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:35.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.703 --rc genhtml_branch_coverage=1 00:25:35.703 --rc genhtml_function_coverage=1 00:25:35.703 --rc genhtml_legend=1 00:25:35.703 --rc geninfo_all_blocks=1 00:25:35.703 --rc geninfo_unexecuted_blocks=1 00:25:35.703 00:25:35.703 ' 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:35.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.703 --rc genhtml_branch_coverage=1 00:25:35.703 --rc genhtml_function_coverage=1 00:25:35.703 --rc genhtml_legend=1 00:25:35.703 --rc geninfo_all_blocks=1 00:25:35.703 --rc geninfo_unexecuted_blocks=1 00:25:35.703 00:25:35.703 ' 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:35.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.703 --rc genhtml_branch_coverage=1 00:25:35.703 --rc genhtml_function_coverage=1 00:25:35.703 --rc genhtml_legend=1 00:25:35.703 --rc geninfo_all_blocks=1 00:25:35.703 --rc geninfo_unexecuted_blocks=1 00:25:35.703 00:25:35.703 ' 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.703 
18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:35.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:35.703 18:02:23 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.703 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:35.704 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:35.704 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:35.704 18:02:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.019 
18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:41.019 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:41.019 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:41.019 Found net devices under 0000:31:00.0: cvl_0_0 
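The per-port pattern here (repeated next for 0000:31:00.1) is plain sysfs walking: a whitelist of supported NIC device IDs is built up front, the E810 0x1592/0x159b entries being the ones that matched this host, and each matching PCI function is then checked for kernel netdevs registered under its device node. Reduced to the two functions found in this run:

    # what the device scan boils down to; PCI addresses taken from this log
    for pci in 0000:31:00.0 0000:31:00.1; do
            for path in /sys/bus/pci/devices/$pci/net/*; do
                    [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
            done
    done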
00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:41.019 Found net devices under 0000:31:00.1: cvl_0_1 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:25:41.019 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:41.020 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:41.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:25:41.278 00:25:41.278 --- 10.0.0.2 ping statistics --- 00:25:41.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.278 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:41.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:25:41.278 00:25:41.278 --- 10.0.0.1 ping statistics --- 00:25:41.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.278 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:41.278 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:41.278 ************************************ 00:25:41.278 START TEST nvmf_digest_clean 00:25:41.278 ************************************ 00:25:41.279 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:25:41.279 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:25:41.279 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:41.279 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:41.279 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:41.279 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:41.279 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:41.279 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:41.279 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:41.279 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3196552 00:25:41.279 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3196552 00:25:41.279 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3196552 ']' 00:25:41.279 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.279 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:41.279 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.279 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:41.279 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:41.279 18:02:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:41.279 [2024-12-06 18:02:28.971527] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:25:41.279 [2024-12-06 18:02:28.971576] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.279 [2024-12-06 18:02:29.047782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.279 [2024-12-06 18:02:29.076397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.279 [2024-12-06 18:02:29.076425] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.279 [2024-12-06 18:02:29.076431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:41.279 [2024-12-06 18:02:29.076436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:41.279 [2024-12-06 18:02:29.076440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
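Worth keeping in view for the rest of the digest suite: nvmf_tcp_init, a little earlier in the log, split the two E810 ports across a network namespace, so the target (10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1) talk over a real routed path, and the nvmf_tgt above runs under ip netns exec. Condensed from the commands in the trace:

    # isolate the target-side port in its own namespace (from nvmf_tcp_init)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # verified in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1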
00:25:41.279 [2024-12-06 18:02:29.076908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.213 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:42.213 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:42.213 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:42.213 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:42.213 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:42.213 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.213 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:42.214 null0 00:25:42.214 [2024-12-06 18:02:29.842524] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.214 [2024-12-06 18:02:29.866728] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3196897 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3196897 /var/tmp/bperf.sock 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3196897 ']' 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
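run_bperf is the initiator half of each pass: it forks bdevperf on its own RPC socket and parks it at --wait-for-rpc until the test has configured the connection. Its essentials, pulled from the xtrace above (workspace paths shortened):

    # initiator-side workload generator, held until RPC configuration arrives
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
            -w randread -o 4096 -q 128 -t 2 -z --wait-for-rpc &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock   # same helper used for nvmf_tgt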
00:25:42.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:42.214 18:02:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:42.214 [2024-12-06 18:02:29.904745] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:25:42.214 [2024-12-06 18:02:29.904792] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3196897 ] 00:25:42.214 [2024-12-06 18:02:29.982779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.214 [2024-12-06 18:02:30.019211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.144 18:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:43.144 18:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:43.144 18:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:43.144 18:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:43.144 18:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:43.144 18:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:43.144 18:02:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:43.402 nvme0n1 00:25:43.402 18:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:43.402 18:02:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:43.402 Running I/O for 2 seconds... 
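Once bdevperf is listening, three RPCs drive the whole measurement. The flag that matters is --ddgst: it makes every NVMe/TCP data PDU on this connection carry a CRC32C data digest, and computing those digests is exactly the accel work counted below (paths shortened from the trace):

    # configure the initiator over the bperf socket, then run the timed workload
    rpc.py -s /var/tmp/bperf.sock framework_start_init
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
            -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 --ddgst        # enable data digest
    bdevperf.py -s /var/tmp/bperf.sock perform_tests     # the 2-second run above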
00:25:45.717 20640.00 IOPS, 80.62 MiB/s [2024-12-06T17:02:33.544Z] 23490.50 IOPS, 91.76 MiB/s 00:25:45.717 Latency(us) 00:25:45.717 [2024-12-06T17:02:33.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.717 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:45.717 nvme0n1 : 2.04 23053.28 90.05 0.00 0.00 5437.49 2225.49 44346.03 00:25:45.717 [2024-12-06T17:02:33.544Z] =================================================================================================================== 00:25:45.717 [2024-12-06T17:02:33.544Z] Total : 23053.28 90.05 0.00 0.00 5437.49 2225.49 44346.03 00:25:45.717 { 00:25:45.717 "results": [ 00:25:45.717 { 00:25:45.717 "job": "nvme0n1", 00:25:45.717 "core_mask": "0x2", 00:25:45.717 "workload": "randread", 00:25:45.717 "status": "finished", 00:25:45.717 "queue_depth": 128, 00:25:45.717 "io_size": 4096, 00:25:45.717 "runtime": 2.043484, 00:25:45.717 "iops": 23053.275680161918, 00:25:45.717 "mibps": 90.05185812563249, 00:25:45.717 "io_failed": 0, 00:25:45.717 "io_timeout": 0, 00:25:45.717 "avg_latency_us": 5437.488618027694, 00:25:45.717 "min_latency_us": 2225.4933333333333, 00:25:45.717 "max_latency_us": 44346.026666666665 00:25:45.717 } 00:25:45.717 ], 00:25:45.717 "core_count": 1 00:25:45.717 } 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:45.717 | select(.opcode=="crc32c") 00:25:45.717 | "\(.module_name) \(.executed)"' 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3196897 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3196897 ']' 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3196897 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3196897 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3196897' 00:25:45.717 killing process with pid 3196897 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3196897 00:25:45.717 Received shutdown signal, test time was about 2.000000 seconds 00:25:45.717 00:25:45.717 Latency(us) 00:25:45.717 [2024-12-06T17:02:33.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.717 [2024-12-06T17:02:33.544Z] =================================================================================================================== 00:25:45.717 [2024-12-06T17:02:33.544Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:45.717 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3196897 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3197583 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3197583 /var/tmp/bperf.sock 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3197583 ']' 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:45.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:45.976 [2024-12-06 18:02:33.591456] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
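Every pass is judged the same way: pull the accel framework statistics out of bdevperf and confirm the crc32c opcode ran in the expected module. With DSA disabled in these runs the expected module is software, and the executed count merely has to be non-zero. The extraction is the jq one-liner from the trace:

    # which accel module computed CRC32C, and how many operations it executed
    rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # prints e.g. "software 23078" (count illustrative); digest.sh reads this
    # as "acc_module acc_executed" and asserts the module match and acc_executed > 0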
00:25:45.976 [2024-12-06 18:02:33.591512] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3197583 ] 00:25:45.976 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:45.976 Zero copy mechanism will not be used. 00:25:45.976 [2024-12-06 18:02:33.656305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.976 [2024-12-06 18:02:33.685431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:45.976 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:46.235 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:46.235 18:02:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:46.494 nvme0n1 00:25:46.494 18:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:46.494 18:02:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:46.753 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:46.753 Zero copy mechanism will not be used. 00:25:46.753 Running I/O for 2 seconds... 
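This second pass changes only the I/O geometry: 128 KiB reads at queue depth 16 instead of 4 KiB at depth 128. The 'zero copy threshold (65536)' lines are informational, bdevperf noting that 131072-byte I/Os exceed its 64 KiB zero-copy cutoff so the data path falls back to copying. The parameterization, as digest.sh invokes it:

    # both passes go through the same helper: run_bperf <rw> <io_size> <qd> <dsa>
    #   run_bperf randread 4096   128 false  ->  bdevperf -w randread -o 4096   -q 128 -t 2
    #   run_bperf randread 131072  16 false  ->  bdevperf -w randread -o 131072 -q 16  -t 2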
00:25:48.623 5819.00 IOPS, 727.38 MiB/s [2024-12-06T17:02:36.450Z] 5123.50 IOPS, 640.44 MiB/s
00:25:48.623 Latency(us)
00:25:48.623 [2024-12-06T17:02:36.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:48.623 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:48.623 nvme0n1 : 2.00 5122.10 640.26 0.00 0.00 3121.08 546.13 8956.59
00:25:48.623 [2024-12-06T17:02:36.450Z] ===================================================================================================================
00:25:48.623 [2024-12-06T17:02:36.450Z] Total : 5122.10 640.26 0.00 0.00 3121.08 546.13 8956.59
00:25:48.623 {
00:25:48.623 "results": [
00:25:48.623 {
00:25:48.623 "job": "nvme0n1",
00:25:48.623 "core_mask": "0x2",
00:25:48.623 "workload": "randread",
00:25:48.623 "status": "finished",
00:25:48.623 "queue_depth": 16,
00:25:48.623 "io_size": 131072,
00:25:48.623 "runtime": 2.003669,
00:25:48.623 "iops": 5122.103501127182,
00:25:48.623 "mibps": 640.2629376408978,
00:25:48.623 "io_failed": 0,
00:25:48.623 "io_timeout": 0,
00:25:48.623 "avg_latency_us": 3121.0759375101497,
00:25:48.623 "min_latency_us": 546.1333333333333,
00:25:48.623 "max_latency_us": 8956.586666666666
00:25:48.623 }
00:25:48.623 ],
00:25:48.623 "core_count": 1
00:25:48.623 }
00:25:48.623 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:25:48.623 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:25:48.623 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:25:48.623 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:25:48.623 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:25:48.623 | select(.opcode=="crc32c")
00:25:48.623 | "\(.module_name) \(.executed)"'
00:25:48.882 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:25:48.882 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:25:48.882 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:25:48.882 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:25:48.882 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3197583
00:25:48.882 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3197583 ']'
00:25:48.882 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3197583
00:25:48.882 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:25:48.882 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:48.882 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3197583
00:25:48.882 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:48.882 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '['
reactor_1 = sudo ']' 00:25:48.882 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3197583' 00:25:48.882 killing process with pid 3197583 00:25:48.882 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3197583 00:25:48.882 Received shutdown signal, test time was about 2.000000 seconds 00:25:48.882 00:25:48.882 Latency(us) 00:25:48.882 [2024-12-06T17:02:36.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.882 [2024-12-06T17:02:36.710Z] =================================================================================================================== 00:25:48.883 [2024-12-06T17:02:36.710Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:48.883 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3197583 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3198261 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3198261 /var/tmp/bperf.sock 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3198261 ']' 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:49.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:49.142 [2024-12-06 18:02:36.744763] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
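Before each bperf process is killed, the harness verifies that the crc32c work actually went through the expected accel module (the accel_get_stats and jq traces above). Roughly the same check, runnable by hand while the socket is still up; the stats variable is a name used here for illustration:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # "software" is the expected module when no DSA device is scanned
    # (scan_dsa=false in these runs); executed must be non-zero.
    stats=$($SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
            | jq -rc '.operations[]
                      | select(.opcode=="crc32c")
                      | "\(.module_name) \(.executed)"')
    read -r acc_module acc_executed <<< "$stats"
    (( acc_executed > 0 )) && [[ $acc_module == software ]]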
00:25:49.142 [2024-12-06 18:02:36.744817] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3198261 ] 00:25:49.142 [2024-12-06 18:02:36.809361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.142 [2024-12-06 18:02:36.838900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:49.142 18:02:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:49.402 18:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:49.402 18:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:49.971 nvme0n1 00:25:49.971 18:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:49.971 18:02:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:49.971 Running I/O for 2 seconds... 
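Alongside the human-readable table, bdevperf.py prints each run's results as a JSON block (like the one printed just below for this 4 KiB randwrite run), which is the easier form to post-process. A small example, assuming one of those blocks has been saved to a file named results.json (a name invented here):

    # Pull the headline numbers out of bdevperf's JSON results block.
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, \(.io_failed) failed"' results.json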
00:25:51.848 30631.00 IOPS, 119.65 MiB/s [2024-12-06T17:02:39.675Z] 30717.00 IOPS, 119.99 MiB/s
00:25:51.848 Latency(us)
00:25:51.848 [2024-12-06T17:02:39.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:51.848 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:51.848 nvme0n1 : 2.01 30728.53 120.03 0.00 0.00 4159.97 2088.96 9229.65
00:25:51.848 [2024-12-06T17:02:39.675Z] ===================================================================================================================
00:25:51.848 [2024-12-06T17:02:39.675Z] Total : 30728.53 120.03 0.00 0.00 4159.97 2088.96 9229.65
00:25:51.848 {
00:25:51.848 "results": [
00:25:51.848 {
00:25:51.848 "job": "nvme0n1",
00:25:51.848 "core_mask": "0x2",
00:25:51.848 "workload": "randwrite",
00:25:51.848 "status": "finished",
00:25:51.848 "queue_depth": 128,
00:25:51.848 "io_size": 4096,
00:25:51.848 "runtime": 2.005498,
00:25:51.848 "iops": 30728.527278511374,
00:25:51.848 "mibps": 120.03330968168505,
00:25:51.848 "io_failed": 0,
00:25:51.848 "io_timeout": 0,
00:25:51.848 "avg_latency_us": 4159.968345828059,
00:25:51.848 "min_latency_us": 2088.96,
00:25:51.848 "max_latency_us": 9229.653333333334
00:25:51.848 }
00:25:51.848 ],
00:25:51.848 "core_count": 1
00:25:51.848 }
00:25:51.848 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:25:51.848 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:25:51.848 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:25:51.848 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:25:51.848 | select(.opcode=="crc32c")
00:25:51.848 | "\(.module_name) \(.executed)"'
00:25:51.848 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:25:52.107 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3198261
00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3198261 ']'
00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3198261
00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3198261
00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '['
reactor_1 = sudo ']' 00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3198261' 00:25:52.108 killing process with pid 3198261 00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3198261 00:25:52.108 Received shutdown signal, test time was about 2.000000 seconds 00:25:52.108 00:25:52.108 Latency(us) 00:25:52.108 [2024-12-06T17:02:39.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:52.108 [2024-12-06T17:02:39.935Z] =================================================================================================================== 00:25:52.108 [2024-12-06T17:02:39.935Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3198261 00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3198935 00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3198935 /var/tmp/bperf.sock 00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3198935 ']' 00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:52.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:52.108 18:02:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:52.367 [2024-12-06 18:02:39.949046] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
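The teardown traced above repeats after every run: killprocess checks that the PID is alive and still names an SPDK reactor before signalling it, and the wait that follows is where bdevperf prints its shutdown latency table. Approximately, with the PID hard-coded for illustration:

    pid=3198261                                # bperf PID from the run above
    kill -0 "$pid"                             # is the process still alive?
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] || kill "$pid"  # only signal a real reactor_* process
    wait "$pid"                                # reap it; the shutdown stats print here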
00:25:52.367 [2024-12-06 18:02:39.949105] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3198935 ] 00:25:52.367 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:52.367 Zero copy mechanism will not be used. 00:25:52.367 [2024-12-06 18:02:40.014210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.367 [2024-12-06 18:02:40.043450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.367 18:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:52.367 18:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:52.367 18:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:52.367 18:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:52.367 18:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:52.627 18:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.627 18:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.887 nvme0n1 00:25:52.887 18:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:52.887 18:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:52.887 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:52.887 Zero copy mechanism will not be used. 00:25:52.887 Running I/O for 2 seconds... 
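In the result tables, IOPS and MiB/s are two views of the same measurement: MiB/s = IOPS x io_size / 2^20. Checking that against the 4 KiB randwrite run above:

    # 30728.53 IOPS at 4096 B per I/O:
    awk 'BEGIN { printf "%.2f\n", 30728.53 * 4096 / 1048576 }'   # -> 120.03, matching the reported mibps

For the 128 KiB runs the ratio is simply IOPS / 8, which is why the run below reports 4840.12 IOPS as 605.02 MiB/s.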
00:25:55.205 3847.00 IOPS, 480.88 MiB/s [2024-12-06T17:02:43.032Z] 4842.00 IOPS, 605.25 MiB/s
00:25:55.205 Latency(us)
00:25:55.205 [2024-12-06T17:02:43.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:55.205 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:25:55.205 nvme0n1 : 2.00 4840.12 605.02 0.00 0.00 3300.62 1174.19 13598.72
00:25:55.205 [2024-12-06T17:02:43.032Z] ===================================================================================================================
00:25:55.205 [2024-12-06T17:02:43.032Z] Total : 4840.12 605.02 0.00 0.00 3300.62 1174.19 13598.72
00:25:55.205 {
00:25:55.205 "results": [
00:25:55.205 {
00:25:55.205 "job": "nvme0n1",
00:25:55.205 "core_mask": "0x2",
00:25:55.205 "workload": "randwrite",
00:25:55.205 "status": "finished",
00:25:55.205 "queue_depth": 16,
00:25:55.205 "io_size": 131072,
00:25:55.205 "runtime": 2.004081,
00:25:55.205 "iops": 4840.123727533967,
00:25:55.205 "mibps": 605.0154659417459,
00:25:55.205 "io_failed": 0,
00:25:55.205 "io_timeout": 0,
00:25:55.205 "avg_latency_us": 3300.6211958762888,
00:25:55.205 "min_latency_us": 1174.1866666666667,
00:25:55.205 "max_latency_us": 13598.72
00:25:55.205 }
00:25:55.205 ],
00:25:55.205 "core_count": 1
00:25:55.205 }
00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:25:55.205 | select(.opcode=="crc32c")
00:25:55.205 | "\(.module_name) \(.executed)"'
00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3198935
00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3198935 ']'
00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3198935
00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3198935
00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '['
reactor_1 = sudo ']' 00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3198935' 00:25:55.205 killing process with pid 3198935 00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3198935 00:25:55.205 Received shutdown signal, test time was about 2.000000 seconds 00:25:55.205 00:25:55.205 Latency(us) 00:25:55.205 [2024-12-06T17:02:43.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.205 [2024-12-06T17:02:43.032Z] =================================================================================================================== 00:25:55.205 [2024-12-06T17:02:43.032Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:55.205 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3198935 00:25:55.206 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3196552 00:25:55.206 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3196552 ']' 00:25:55.206 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3196552 00:25:55.206 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:55.206 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:55.206 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3196552 00:25:55.206 18:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:55.206 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:55.206 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3196552' 00:25:55.206 killing process with pid 3196552 00:25:55.206 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3196552 00:25:55.206 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3196552 00:25:55.464 00:25:55.464 real 0m14.174s 00:25:55.464 user 0m27.550s 00:25:55.464 sys 0m3.069s 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:55.464 ************************************ 00:25:55.464 END TEST nvmf_digest_clean 00:25:55.464 ************************************ 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:55.464 ************************************ 00:25:55.464 START TEST nvmf_digest_error 00:25:55.464 ************************************ 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3199668 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3199668 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3199668 ']' 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:55.464 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.464 [2024-12-06 18:02:43.193508] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:25:55.464 [2024-12-06 18:02:43.193557] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.464 [2024-12-06 18:02:43.266449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.724 [2024-12-06 18:02:43.296260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.724 [2024-12-06 18:02:43.296289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.724 [2024-12-06 18:02:43.296297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.724 [2024-12-06 18:02:43.296302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.724 [2024-12-06 18:02:43.296306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
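nvmf_digest_error reuses the same bperf flow but first reroutes crc32c through the accel framework's error-injection module; the corrupted digests are what produce the long run of data digest errors and retried READ completions further down. A condensed, hedged sketch of the setup traced just below; the null bdev's size and block-size arguments and the subsystem-creation step are not shown in this log and are assumed here:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Target side (nvmf_tgt started with --wait-for-rpc on the default /var/tmp/spdk.sock):
    $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error   # route crc32c through the error module
    $SPDK/scripts/rpc.py framework_start_init
    $SPDK/scripts/rpc.py bdev_null_create null0 100 4096       # size/block-size args assumed
    # ... TCP transport plus subsystem, then the listener on 10.0.0.2:4420 as logged

    # Initiator side (bdevperf on /var/tmp/bperf.sock):
    # --bdev-retry-count -1 requests unlimited bdev-layer retries, hence the
    # repeated READ completions with TRANSIENT TRANSPORT ERROR below.
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable   # attach cleanly first
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests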
00:25:55.724 [2024-12-06 18:02:43.296770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.724 [2024-12-06 18:02:43.345111] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.724 null0 00:25:55.724 [2024-12-06 18:02:43.420478] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.724 [2024-12-06 18:02:43.444698] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3199875 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3199875 /var/tmp/bperf.sock 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3199875 ']' 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:55.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.724 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:55.724 [2024-12-06 18:02:43.483332] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:25:55.724 [2024-12-06 18:02:43.483380] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199875 ] 00:25:55.724 [2024-12-06 18:02:43.547519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.981 [2024-12-06 18:02:43.577593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.981 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:55.981 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:55.982 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:55.982 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:55.982 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:55.982 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.982 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:56.240 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.240 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:56.240 18:02:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:56.240 nvme0n1 00:25:56.500 18:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:56.500 18:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.500 18:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # 
set +x 00:25:56.500 18:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.500 18:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:56.500 18:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:56.500 Running I/O for 2 seconds... 00:25:56.500 [2024-12-06 18:02:44.177160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.500 [2024-12-06 18:02:44.177193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-12-06 18:02:44.177202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.500 [2024-12-06 18:02:44.186475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.500 [2024-12-06 18:02:44.186495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-12-06 18:02:44.186504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.500 [2024-12-06 18:02:44.195023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.500 [2024-12-06 18:02:44.195042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-12-06 18:02:44.195050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.500 [2024-12-06 18:02:44.204140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.500 [2024-12-06 18:02:44.204159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-12-06 18:02:44.204166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.500 [2024-12-06 18:02:44.213903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.500 [2024-12-06 18:02:44.213922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-12-06 18:02:44.213929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.500 [2024-12-06 18:02:44.222271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.500 [2024-12-06 18:02:44.222288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-12-06 18:02:44.222295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.500 [2024-12-06 18:02:44.233033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.500 [2024-12-06 18:02:44.233051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-12-06 18:02:44.233057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.500 [2024-12-06 18:02:44.241536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.500 [2024-12-06 18:02:44.241554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-12-06 18:02:44.241561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.500 [2024-12-06 18:02:44.253042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.500 [2024-12-06 18:02:44.253060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-12-06 18:02:44.253067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.500 [2024-12-06 18:02:44.263577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.500 [2024-12-06 18:02:44.263595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-12-06 18:02:44.263602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.500 [2024-12-06 18:02:44.273403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.500 [2024-12-06 18:02:44.273421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-12-06 18:02:44.273435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.500 [2024-12-06 18:02:44.280986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.500 [2024-12-06 18:02:44.281004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-12-06 18:02:44.281010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.500 [2024-12-06 18:02:44.291504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.500 [2024-12-06 18:02:44.291521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-12-06 18:02:44.291528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.500 [2024-12-06 18:02:44.301416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.500 [2024-12-06 18:02:44.301434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-12-06 18:02:44.301440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.500 [2024-12-06 18:02:44.309902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.500 [2024-12-06 18:02:44.309919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-12-06 18:02:44.309926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.500 [2024-12-06 18:02:44.319738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.500 [2024-12-06 18:02:44.319756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-12-06 18:02:44.319763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.759 [2024-12-06 18:02:44.327021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.759 [2024-12-06 18:02:44.327038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.759 [2024-12-06 18:02:44.327045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.759 [2024-12-06 18:02:44.337475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.759 [2024-12-06 18:02:44.337493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.759 [2024-12-06 18:02:44.337500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.759 [2024-12-06 18:02:44.347895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.759 [2024-12-06 18:02:44.347912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.759 [2024-12-06 18:02:44.347919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.759 [2024-12-06 18:02:44.356808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.759 [2024-12-06 18:02:44.356825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:56.759 [2024-12-06 18:02:44.356832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.759 [2024-12-06 18:02:44.364939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.759 [2024-12-06 18:02:44.364957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.759 [2024-12-06 18:02:44.364964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.759 [2024-12-06 18:02:44.374250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.759 [2024-12-06 18:02:44.374267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.759 [2024-12-06 18:02:44.374274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.759 [2024-12-06 18:02:44.383219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.759 [2024-12-06 18:02:44.383237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.759 [2024-12-06 18:02:44.383243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.759 [2024-12-06 18:02:44.393003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.393021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.393028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.401114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.401132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.401138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.410519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.410537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.410543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.420167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.420184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.420190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.429078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.429096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.429110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.437001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.437019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.437025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.448177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.448195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.448201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.458137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.458154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.458160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.466692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.466710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.466717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.474713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.474731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.474737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.483824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.483841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.483848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.492755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.492773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.492780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.501679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.501696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.501703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.511418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.511439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.511446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.519829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.519847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.519854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.528965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.528982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.528989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.537432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.537449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.537456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.546450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 
00:25:56.760 [2024-12-06 18:02:44.546468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.546475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.556820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.556837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.556844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.565963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.565980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.565987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.574166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.574183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.574189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.760 [2024-12-06 18:02:44.583718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:56.760 [2024-12-06 18:02:44.583735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.760 [2024-12-06 18:02:44.583742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.020 [2024-12-06 18:02:44.592065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.020 [2024-12-06 18:02:44.592083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.020 [2024-12-06 18:02:44.592089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.020 [2024-12-06 18:02:44.600737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.020 [2024-12-06 18:02:44.600754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.020 [2024-12-06 18:02:44.600761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.020 [2024-12-06 18:02:44.610077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.020 [2024-12-06 18:02:44.610095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.020 [2024-12-06 18:02:44.610105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.020 [2024-12-06 18:02:44.620094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.020 [2024-12-06 18:02:44.620115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.020 [2024-12-06 18:02:44.620122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.020 [2024-12-06 18:02:44.628482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.020 [2024-12-06 18:02:44.628499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.628505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.637309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.637327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.637333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.646588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.646607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.646614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.654946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.654964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.654970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.663672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.663690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.663700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.672450] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.672467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.672473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.682149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.682167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.682173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.690408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.690425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.690431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.699222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.699240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.699247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.708415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.708433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.708440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.716766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.716784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.716791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.725615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.725633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.725640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:57.021 [2024-12-06 18:02:44.734602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.734620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.734627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.743129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.743146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.743153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.752246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.752265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.752271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.762415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.762433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.762440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.771470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.771488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.771494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.779929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.779947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.779954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.788786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.788804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.788811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.798282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.798300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.798307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.806425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.806443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.806449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.814838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.814855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.814865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.824483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.824500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.824507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.833248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.833266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.833273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.021 [2024-12-06 18:02:44.841384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.021 [2024-12-06 18:02:44.841401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.021 [2024-12-06 18:02:44.841408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.282 [2024-12-06 18:02:44.850582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.282 [2024-12-06 18:02:44.850600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.282 [2024-12-06 18:02:44.850608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.282 [2024-12-06 18:02:44.859266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.282 [2024-12-06 18:02:44.859284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.282 [2024-12-06 18:02:44.859291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.282 [2024-12-06 18:02:44.867907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.282 [2024-12-06 18:02:44.867925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.282 [2024-12-06 18:02:44.867931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.282 [2024-12-06 18:02:44.876183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.282 [2024-12-06 18:02:44.876201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.282 [2024-12-06 18:02:44.876208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.282 [2024-12-06 18:02:44.885459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.282 [2024-12-06 18:02:44.885476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.282 [2024-12-06 18:02:44.885483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.282 [2024-12-06 18:02:44.894813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.282 [2024-12-06 18:02:44.894834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.282 [2024-12-06 18:02:44.894841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.282 [2024-12-06 18:02:44.903060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.282 [2024-12-06 18:02:44.903078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.282 [2024-12-06 18:02:44.903085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.282 [2024-12-06 18:02:44.914377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.282 [2024-12-06 18:02:44.914394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:57.282 [2024-12-06 18:02:44.914401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.282 [2024-12-06 18:02:44.925708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.282 [2024-12-06 18:02:44.925726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.282 [2024-12-06 18:02:44.925732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.282 [2024-12-06 18:02:44.937065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.282 [2024-12-06 18:02:44.937082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.282 [2024-12-06 18:02:44.937089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.282 [2024-12-06 18:02:44.944841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.283 [2024-12-06 18:02:44.944858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.283 [2024-12-06 18:02:44.944865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.283 [2024-12-06 18:02:44.956739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.283 [2024-12-06 18:02:44.956757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.283 [2024-12-06 18:02:44.956764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.283 [2024-12-06 18:02:44.968116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.283 [2024-12-06 18:02:44.968134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.283 [2024-12-06 18:02:44.968140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.283 [2024-12-06 18:02:44.978896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.283 [2024-12-06 18:02:44.978915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.283 [2024-12-06 18:02:44.978921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.283 [2024-12-06 18:02:44.990576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.283 [2024-12-06 18:02:44.990593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:14257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.283 [2024-12-06 18:02:44.990600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.283 [2024-12-06 18:02:44.998691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.283 [2024-12-06 18:02:44.998708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.283 [2024-12-06 18:02:44.998715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.283 [2024-12-06 18:02:45.007805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.283 [2024-12-06 18:02:45.007823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.283 [2024-12-06 18:02:45.007830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.283 [2024-12-06 18:02:45.017417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.283 [2024-12-06 18:02:45.017434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.283 [2024-12-06 18:02:45.017441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.283 [2024-12-06 18:02:45.026923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.283 [2024-12-06 18:02:45.026941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.283 [2024-12-06 18:02:45.026948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.283 [2024-12-06 18:02:45.036311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.283 [2024-12-06 18:02:45.036329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.283 [2024-12-06 18:02:45.036335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.283 [2024-12-06 18:02:45.043966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.283 [2024-12-06 18:02:45.043983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.283 [2024-12-06 18:02:45.043990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.283 [2024-12-06 18:02:45.055684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.283 [2024-12-06 18:02:45.055702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.283 [2024-12-06 18:02:45.055709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.283 [2024-12-06 18:02:45.066059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.283 [2024-12-06 18:02:45.066078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.283 [2024-12-06 18:02:45.066087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.283 [2024-12-06 18:02:45.074211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.283 [2024-12-06 18:02:45.074228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.283 [2024-12-06 18:02:45.074236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.283 [2024-12-06 18:02:45.083037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.283 [2024-12-06 18:02:45.083054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.283 [2024-12-06 18:02:45.083061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.283 [2024-12-06 18:02:45.093560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.283 [2024-12-06 18:02:45.093578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.283 [2024-12-06 18:02:45.093585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.283 [2024-12-06 18:02:45.105531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.283 [2024-12-06 18:02:45.105549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.283 [2024-12-06 18:02:45.105556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.116055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.116073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.116080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.127498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 
00:25:57.543 [2024-12-06 18:02:45.127516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.127523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.139113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.139131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.139138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.146592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.146610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.146616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.157789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.157810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.157816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 27113.00 IOPS, 105.91 MiB/s [2024-12-06T17:02:45.370Z] [2024-12-06 18:02:45.169185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.169203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.169210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.177786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.177804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.177810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.186496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.186514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.186521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.195581] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.195598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.195605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.203998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.204017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.204023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.213430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.213448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.213454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.221039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.221056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.221063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.231775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.231792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.231802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.241040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.241057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.241063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.250755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.250772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.250778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:57.543 [2024-12-06 18:02:45.259435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.259453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.259459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.269792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.269809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.269815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.279531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.279548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.279555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.288769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.288787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.288793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.298107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.298124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.298131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.306431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.543 [2024-12-06 18:02:45.306448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.543 [2024-12-06 18:02:45.306454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.543 [2024-12-06 18:02:45.315640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.544 [2024-12-06 18:02:45.315660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.544 [2024-12-06 18:02:45.315667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.544 [2024-12-06 18:02:45.323640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.544 [2024-12-06 18:02:45.323657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.544 [2024-12-06 18:02:45.323664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.544 [2024-12-06 18:02:45.334304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.544 [2024-12-06 18:02:45.334322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.544 [2024-12-06 18:02:45.334328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.544 [2024-12-06 18:02:45.345764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.544 [2024-12-06 18:02:45.345782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.544 [2024-12-06 18:02:45.345788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.544 [2024-12-06 18:02:45.354198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.544 [2024-12-06 18:02:45.354215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.544 [2024-12-06 18:02:45.354222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.544 [2024-12-06 18:02:45.365553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.544 [2024-12-06 18:02:45.365571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.544 [2024-12-06 18:02:45.365577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.803 [2024-12-06 18:02:45.375800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.803 [2024-12-06 18:02:45.375819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.803 [2024-12-06 18:02:45.375825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.803 [2024-12-06 18:02:45.386325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.803 [2024-12-06 18:02:45.386343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.803 [2024-12-06 18:02:45.386349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.803 [2024-12-06 18:02:45.394816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.803 [2024-12-06 18:02:45.394833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.803 [2024-12-06 18:02:45.394840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.803 [2024-12-06 18:02:45.403801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.803 [2024-12-06 18:02:45.403818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.803 [2024-12-06 18:02:45.403824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.803 [2024-12-06 18:02:45.411413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.803 [2024-12-06 18:02:45.411430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.803 [2024-12-06 18:02:45.411437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.803 [2024-12-06 18:02:45.421511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.803 [2024-12-06 18:02:45.421529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.803 [2024-12-06 18:02:45.421535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.803 [2024-12-06 18:02:45.432463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.803 [2024-12-06 18:02:45.432480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.803 [2024-12-06 18:02:45.432487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.443909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.443926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.804 [2024-12-06 18:02:45.443932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.454857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.454876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:57.804 [2024-12-06 18:02:45.454882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.465880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.465897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.804 [2024-12-06 18:02:45.465904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.474422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.474439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.804 [2024-12-06 18:02:45.474445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.483672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.483689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.804 [2024-12-06 18:02:45.483702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.492298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.492316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.804 [2024-12-06 18:02:45.492322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.501548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.501566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.804 [2024-12-06 18:02:45.501573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.509418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.509435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.804 [2024-12-06 18:02:45.509442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.519182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.519200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 
lba:18074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.804 [2024-12-06 18:02:45.519207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.528217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.528234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.804 [2024-12-06 18:02:45.528241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.536995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.537013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.804 [2024-12-06 18:02:45.537020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.546755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.546772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.804 [2024-12-06 18:02:45.546779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.555291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.555309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.804 [2024-12-06 18:02:45.555315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.564987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.565004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.804 [2024-12-06 18:02:45.565011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.572398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.572415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.804 [2024-12-06 18:02:45.572421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.583160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.583178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.804 [2024-12-06 18:02:45.583184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.594821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.594839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.804 [2024-12-06 18:02:45.594846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.606018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.606035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.804 [2024-12-06 18:02:45.606042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.613558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.613575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.804 [2024-12-06 18:02:45.613581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.804 [2024-12-06 18:02:45.623508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:57.804 [2024-12-06 18:02:45.623525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.804 [2024-12-06 18:02:45.623532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.085 [2024-12-06 18:02:45.634655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.085 [2024-12-06 18:02:45.634673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.085 [2024-12-06 18:02:45.634680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.085 [2024-12-06 18:02:45.643402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.085 [2024-12-06 18:02:45.643419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.085 [2024-12-06 18:02:45.643429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.085 [2024-12-06 18:02:45.653157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 
00:25:58.085 [2024-12-06 18:02:45.653174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.085 [2024-12-06 18:02:45.653181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.085 [2024-12-06 18:02:45.664409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.085 [2024-12-06 18:02:45.664426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.085 [2024-12-06 18:02:45.664433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.085 [2024-12-06 18:02:45.674823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.674841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.674847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.683502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.683521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.683527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.692104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.692122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.692129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.700660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.700678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.700685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.709779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.709796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.709803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.718497] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.718515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.718521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.727606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.727626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.727633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.737145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.737163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.737169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.744686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.744703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.744710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.753557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.753576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.753582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.762753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.762770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.762777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.771214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.771231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.771238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.780908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.780925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.780932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.789853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.789870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.789877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.798697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.798715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.798721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.809379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.809396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.809403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.818185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.818202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.818209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.827717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.827734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.827741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.835274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.835291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.835297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.845504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.845521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.845528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.856502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.856520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.856526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.866165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.866182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.866188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.876051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.876069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.876075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.884940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.884957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.884967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.086 [2024-12-06 18:02:45.892907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.086 [2024-12-06 18:02:45.892924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.086 [2024-12-06 18:02:45.892931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.394 [2024-12-06 18:02:45.902162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.394 [2024-12-06 18:02:45.902179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.394 [2024-12-06 18:02:45.902186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.394 [2024-12-06 18:02:45.912769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.394 [2024-12-06 18:02:45.912787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.394 [2024-12-06 18:02:45.912794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.394 [2024-12-06 18:02:45.920788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.394 [2024-12-06 18:02:45.920806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.394 [2024-12-06 18:02:45.920813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.394 [2024-12-06 18:02:45.929350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.394 [2024-12-06 18:02:45.929368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.394 [2024-12-06 18:02:45.929374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.394 [2024-12-06 18:02:45.938478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.394 [2024-12-06 18:02:45.938495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.394 [2024-12-06 18:02:45.938502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.394 [2024-12-06 18:02:45.947182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.394 [2024-12-06 18:02:45.947200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.394 [2024-12-06 18:02:45.947206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.394 [2024-12-06 18:02:45.957975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.394 [2024-12-06 18:02:45.957993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.394 [2024-12-06 18:02:45.957999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.394 [2024-12-06 18:02:45.967326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.394 [2024-12-06 18:02:45.967344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:58.394 [2024-12-06 18:02:45.967350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.394 [2024-12-06 18:02:45.976289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.394 [2024-12-06 18:02:45.976306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.394 [2024-12-06 18:02:45.976312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.394 [2024-12-06 18:02:45.984890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.394 [2024-12-06 18:02:45.984907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.394 [2024-12-06 18:02:45.984914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.394 [2024-12-06 18:02:45.994517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.394 [2024-12-06 18:02:45.994534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.394 [2024-12-06 18:02:45.994541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.394 [2024-12-06 18:02:46.002783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.394 [2024-12-06 18:02:46.002800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.394 [2024-12-06 18:02:46.002806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.394 [2024-12-06 18:02:46.011396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.394 [2024-12-06 18:02:46.011413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.394 [2024-12-06 18:02:46.011419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.394 [2024-12-06 18:02:46.020437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.394 [2024-12-06 18:02:46.020455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.394 [2024-12-06 18:02:46.020462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.394 [2024-12-06 18:02:46.031616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.394 [2024-12-06 18:02:46.031633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:1174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.394 [2024-12-06 18:02:46.031640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.394 [2024-12-06 18:02:46.043577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.394 [2024-12-06 18:02:46.043595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.394 [2024-12-06 18:02:46.043604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.394 [2024-12-06 18:02:46.055013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.394 [2024-12-06 18:02:46.055031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.394 [2024-12-06 18:02:46.055037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.394 [2024-12-06 18:02:46.064974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.395 [2024-12-06 18:02:46.064991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.395 [2024-12-06 18:02:46.064998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.395 [2024-12-06 18:02:46.072496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.395 [2024-12-06 18:02:46.072513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.395 [2024-12-06 18:02:46.072519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.395 [2024-12-06 18:02:46.081868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.395 [2024-12-06 18:02:46.081885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.395 [2024-12-06 18:02:46.081892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.395 [2024-12-06 18:02:46.091294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.395 [2024-12-06 18:02:46.091312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.395 [2024-12-06 18:02:46.091318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.395 [2024-12-06 18:02:46.099935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.395 [2024-12-06 18:02:46.099952] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.395 [2024-12-06 18:02:46.099959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.395 [2024-12-06 18:02:46.108782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.395 [2024-12-06 18:02:46.108799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.395 [2024-12-06 18:02:46.108805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.395 [2024-12-06 18:02:46.117820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.395 [2024-12-06 18:02:46.117837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.395 [2024-12-06 18:02:46.117844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.395 [2024-12-06 18:02:46.126505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.395 [2024-12-06 18:02:46.126524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.395 [2024-12-06 18:02:46.126531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.395 [2024-12-06 18:02:46.135189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.395 [2024-12-06 18:02:46.135207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.395 [2024-12-06 18:02:46.135213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.395 [2024-12-06 18:02:46.143868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.395 [2024-12-06 18:02:46.143886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.395 [2024-12-06 18:02:46.143893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.395 [2024-12-06 18:02:46.152769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 00:25:58.395 [2024-12-06 18:02:46.152787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.395 [2024-12-06 18:02:46.152793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.395 [2024-12-06 18:02:46.162051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13db540) 
00:25:58.395 [2024-12-06 18:02:46.162069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.395 [2024-12-06 18:02:46.162076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:58.395 27191.00 IOPS, 106.21 MiB/s
00:25:58.395 Latency(us)
00:25:58.395 [2024-12-06T17:02:46.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:58.395 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:25:58.395 nvme0n1 : 2.00 27207.90 106.28 0.00 0.00 4699.99 2211.84 15182.51
00:25:58.395 [2024-12-06T17:02:46.222Z] ===================================================================================================================
00:25:58.395 [2024-12-06T17:02:46.222Z] Total : 27207.90 106.28 0.00 0.00 4699.99 2211.84 15182.51
00:25:58.395 {
00:25:58.395   "results": [
00:25:58.395     {
00:25:58.395       "job": "nvme0n1",
00:25:58.395       "core_mask": "0x2",
00:25:58.395       "workload": "randread",
00:25:58.395       "status": "finished",
00:25:58.395       "queue_depth": 128,
00:25:58.395       "io_size": 4096,
00:25:58.395       "runtime": 2.003462,
00:25:58.395       "iops": 27207.9031196998,
00:25:58.395       "mibps": 106.28087156132734,
00:25:58.395       "io_failed": 0,
00:25:58.395       "io_timeout": 0,
00:25:58.395       "avg_latency_us": 4699.986672047943,
00:25:58.395       "min_latency_us": 2211.84,
00:25:58.395       "max_latency_us": 15182.506666666666
00:25:58.395     }
00:25:58.395   ],
00:25:58.395   "core_count": 1
00:25:58.395 }
00:25:58.395 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:58.395 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:58.395 | .driver_specific
00:25:58.395 | .nvme_error
00:25:58.395 | .status_code
00:25:58.395 | .command_transient_transport_error'
00:25:58.395 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:58.395 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:58.707 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 213 > 0 ))
00:25:58.707 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3199875
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3199875 ']'
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3199875
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3199875
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
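The get_transient_errcount trace above is the heart of the pass/fail check: it pulls per-bdev NVMe error statistics over bdevperf's RPC socket and filters one counter out with jq. A minimal standalone sketch of that query, reusing the rpc.py path, socket, and jq filter shown in the trace (the shell variable names are illustrative, not from digest.sh):

    # Query bdevperf's RPC socket for nvme0n1's I/O statistics; with
    # bdev_nvme_set_options --nvme-error-stat enabled, driver_specific
    # carries one counter per NVMe status code.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    # digest.sh asserts the injected digest errors actually surfaced;
    # in this run the comparison was (( 213 > 0 )).
    (( errcount > 0 )) && echo "saw $errcount transient transport errors"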
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3199875'
00:25:58.708 killing process with pid 3199875
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3199875
00:25:58.708 Received shutdown signal, test time was about 2.000000 seconds
00:25:58.708
00:25:58.708 Latency(us)
00:25:58.708 [2024-12-06T17:02:46.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:58.708 [2024-12-06T17:02:46.535Z] ===================================================================================================================
00:25:58.708 [2024-12-06T17:02:46.535Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3199875
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3200534
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3200534 /var/tmp/bperf.sock
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3200534 ']'
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:58.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:58.708 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:25:58.708 [2024-12-06 18:02:46.525006] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization...
00:25:58.708 [2024-12-06 18:02:46.525062] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200534 ]
00:25:58.708 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:58.708 Zero copy mechanism will not be used.
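The block above launches a fresh bdevperf instance for the randread/131072/qd16 error pass and then waits for its RPC socket. A sketch of that launch sequence, with the bdevperf flags exactly as traced; the polling loop merely stands in for the waitforlisten helper from autotest_common.sh, whose body is not part of this log:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -m 2: core mask 0x2; -z: stay idle until perform_tests arrives over
    # the RPC socket; -o/-q/-t match run_bperf_err randread 131072 16.
    "$spdk"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Poll until the UNIX domain socket answers RPCs (max_retries=100,
    # mirroring the traced waitforlisten defaults).
    max_retries=100
    until "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null; do
        (( max_retries-- > 0 )) || { echo "bperf.sock never came up" >&2; exit 1; }
        sleep 0.1
    done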
00:25:58.968 [2024-12-06 18:02:46.589741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:58.968 [2024-12-06 18:02:46.619416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:58.968 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:58.968 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:25:58.968 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:58.968 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:59.227 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:59.227 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.227 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:59.227 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.227 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:59.227 18:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:59.487 nvme0n1
00:25:59.487 18:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:25:59.487 18:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:59.487 18:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:59.487 18:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:59.487 18:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:59.487 18:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:59.487 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:59.487 Zero copy mechanism will not be used.
00:25:59.487 Running I/O for 2 seconds...
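The RPC sequence just traced is the entire error-injection setup for this pass; condensed here as one runnable sketch (socket, target address, and subsystem NQN copied from this run, comments limited to what the surrounding log shows):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    # Track completions per NVMe status code so bdev_get_iostat can later
    # report command_transient_transport_error; retry count as in digest.sh.
    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Make sure no crc32c corruption is armed while the controller attaches.
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable
    # Attach over TCP with data digest enabled (--ddgst), creating nvme0n1;
    # read payloads are now verified against a CRC32C data digest.
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm crc32c corruption (arguments exactly as traced); the corrupted
    # digests produce the "data digest error" lines and the COMMAND
    # TRANSIENT TRANSPORT ERROR (00/22) completions that follow below.
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32
    # Release the suspended bdevperf job over the same socket.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$sock" perform_tests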
00:25:59.487 [2024-12-06 18:02:47.280063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.487 [2024-12-06 18:02:47.280095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.487 [2024-12-06 18:02:47.280111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.487 [2024-12-06 18:02:47.284755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.487 [2024-12-06 18:02:47.284781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.487 [2024-12-06 18:02:47.284789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.487 [2024-12-06 18:02:47.290822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.487 [2024-12-06 18:02:47.290845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.487 [2024-12-06 18:02:47.290852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.487 [2024-12-06 18:02:47.298082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.487 [2024-12-06 18:02:47.298110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.487 [2024-12-06 18:02:47.298117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.487 [2024-12-06 18:02:47.303381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.487 [2024-12-06 18:02:47.303402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.487 [2024-12-06 18:02:47.303409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.487 [2024-12-06 18:02:47.309998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.487 [2024-12-06 18:02:47.310019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.487 [2024-12-06 18:02:47.310026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.747 [2024-12-06 18:02:47.315610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.315631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.315637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.324433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.324454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.324461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.332071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.332092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.332099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.337083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.337106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.337113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.344776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.344796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.344803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.351975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.351995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.352008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.358451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.358470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.358477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.367284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.367305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.367312] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.376279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.376301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.376308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.384639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.384659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.384666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.393760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.393780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.393787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.398211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.398232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.398238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.402824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.402844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.402850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.407553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.407573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.407579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.411490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.411510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.411517] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.415648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.415668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.415675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.421238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.421258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.421264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.425866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.425886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.425892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.430056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.430076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.430084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.435036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.435057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.435065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.439026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.439046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.439053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.443072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.443092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:59.748 [2024-12-06 18:02:47.443104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.446595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.446615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.446627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.450962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.450982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.450989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.456084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.456112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.456119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.462704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.462724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.748 [2024-12-06 18:02:47.462731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:59.748 [2024-12-06 18:02:47.467793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.748 [2024-12-06 18:02:47.467815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.749 [2024-12-06 18:02:47.467822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:59.749 [2024-12-06 18:02:47.472701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.749 [2024-12-06 18:02:47.472721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.749 [2024-12-06 18:02:47.472727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:59.749 [2024-12-06 18:02:47.477300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:25:59.749 [2024-12-06 18:02:47.477319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.749 [2024-12-06 18:02:47.477326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:59.749 [2024-12-06 18:02:47.483496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650)
00:25:59.749 [2024-12-06 18:02:47.483515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.749 [2024-12-06 18:02:47.483522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... repeated records of the same form elided (timestamps 18:02:47.487 through 18:02:48.273): nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done reports "data digest error on tqpair=(0x1939650)", followed by the failed READ command (sqid:1, cid 0-15, varying lba, len:32, SGL TRANSPORT DATA BLOCK) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:26:00.534 4528.00 IOPS, 566.00 MiB/s [2024-12-06T17:02:48.361Z]
[... repeated records of the same form elided (timestamps 18:02:48.282 through 18:02:48.492): data digest errors on tqpair=(0x1939650) with READ completions of COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:26:00.795 [2024-12-06 18:02:48.500730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650)
00:26:00.795 [2024-12-06 18:02:48.500749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.795 [2024-12-06 18:02:48.500757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:00.795 [2024-12-06 18:02:48.508979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x1939650) 00:26:00.795 [2024-12-06 18:02:48.509000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.795 [2024-12-06 18:02:48.509011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:00.796 [2024-12-06 18:02:48.516376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:00.796 [2024-12-06 18:02:48.516395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.796 [2024-12-06 18:02:48.516402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:00.796 [2024-12-06 18:02:48.524645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:00.796 [2024-12-06 18:02:48.524664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.796 [2024-12-06 18:02:48.524670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:00.796 [2024-12-06 18:02:48.531575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:00.796 [2024-12-06 18:02:48.531595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.796 [2024-12-06 18:02:48.531603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:00.796 [2024-12-06 18:02:48.537840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:00.796 [2024-12-06 18:02:48.537861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.796 [2024-12-06 18:02:48.537867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:00.796 [2024-12-06 18:02:48.543089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:00.796 [2024-12-06 18:02:48.543117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.796 [2024-12-06 18:02:48.543124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:00.796 [2024-12-06 18:02:48.549278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:00.796 [2024-12-06 18:02:48.549298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.796 [2024-12-06 18:02:48.549305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:00.796 [2024-12-06 18:02:48.555440] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:00.796 [2024-12-06 18:02:48.555459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.796 [2024-12-06 18:02:48.555466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:00.796 [2024-12-06 18:02:48.564956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:00.796 [2024-12-06 18:02:48.564976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.796 [2024-12-06 18:02:48.564983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:00.796 [2024-12-06 18:02:48.574672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:00.796 [2024-12-06 18:02:48.574693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.796 [2024-12-06 18:02:48.574699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:00.796 [2024-12-06 18:02:48.584183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:00.796 [2024-12-06 18:02:48.584203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.796 [2024-12-06 18:02:48.584210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:00.796 [2024-12-06 18:02:48.593753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:00.796 [2024-12-06 18:02:48.593773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.796 [2024-12-06 18:02:48.593779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:00.796 [2024-12-06 18:02:48.603444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:00.796 [2024-12-06 18:02:48.603465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.796 [2024-12-06 18:02:48.603472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:00.796 [2024-12-06 18:02:48.612504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:00.796 [2024-12-06 18:02:48.612525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.796 [2024-12-06 18:02:48.612535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:01.056 [2024-12-06 18:02:48.621860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.056 [2024-12-06 18:02:48.621880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.056 [2024-12-06 18:02:48.621887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.056 [2024-12-06 18:02:48.631640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.056 [2024-12-06 18:02:48.631660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.056 [2024-12-06 18:02:48.631666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.056 [2024-12-06 18:02:48.640720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.056 [2024-12-06 18:02:48.640740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.056 [2024-12-06 18:02:48.640746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.056 [2024-12-06 18:02:48.650055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.056 [2024-12-06 18:02:48.650074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.056 [2024-12-06 18:02:48.650082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:01.056 [2024-12-06 18:02:48.660645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.056 [2024-12-06 18:02:48.660666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.056 [2024-12-06 18:02:48.660672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.056 [2024-12-06 18:02:48.670867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.056 [2024-12-06 18:02:48.670888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.056 [2024-12-06 18:02:48.670894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.056 [2024-12-06 18:02:48.681088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.056 [2024-12-06 18:02:48.681114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.056 [2024-12-06 18:02:48.681123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.056 [2024-12-06 18:02:48.685256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.056 [2024-12-06 18:02:48.685275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.056 [2024-12-06 18:02:48.685282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:01.056 [2024-12-06 18:02:48.691576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.056 [2024-12-06 18:02:48.691596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.056 [2024-12-06 18:02:48.691603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.056 [2024-12-06 18:02:48.697259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.056 [2024-12-06 18:02:48.697279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.056 [2024-12-06 18:02:48.697285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.702620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.702640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.702647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.708561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.708580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.708587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.714620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.714642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.714649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.721694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.721714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.721721] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.727204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.727225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.727231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.732337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.732361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.732368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.738511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.738530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.738541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.748792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.748811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.748817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.758318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.758338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.758345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.767331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.767351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.767358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.776682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.776701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:01.057 [2024-12-06 18:02:48.776708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.783414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.783433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.783440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.790748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.790768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.790775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.798341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.798363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.798370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.806765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.806785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.806791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.813022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.813045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.813052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.819157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.819176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.819182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.825340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.825362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1472 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.825371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.830893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.830913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.830919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.836286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.836306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.836313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.843122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.843141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.843147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.851528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.851547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.851553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.858450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.858473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.858481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.865871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.865893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.865899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.057 [2024-12-06 18:02:48.874045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.057 [2024-12-06 18:02:48.874064] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.057 [2024-12-06 18:02:48.874071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.318 [2024-12-06 18:02:48.882841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.318 [2024-12-06 18:02:48.882861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.318 [2024-12-06 18:02:48.882868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:01.318 [2024-12-06 18:02:48.889401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.318 [2024-12-06 18:02:48.889419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.318 [2024-12-06 18:02:48.889426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.318 [2024-12-06 18:02:48.897396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.318 [2024-12-06 18:02:48.897416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.318 [2024-12-06 18:02:48.897423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.318 [2024-12-06 18:02:48.901851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.318 [2024-12-06 18:02:48.901875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:48.901884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:48.910232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:48.910251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:48.910258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:48.918200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:48.918218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:48.918225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:48.924782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:48.924800] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:48.924807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:48.933893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:48.933913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:48.933924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:48.943256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:48.943276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:48.943283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:48.953823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:48.953843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:48.953850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:48.963851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:48.963875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:48.963882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:48.973092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:48.973118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:48.973124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:48.982084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:48.982110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:48.982117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:48.992695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:48.992714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:48.992721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:49.003380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:49.003403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:49.003410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:49.013824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:49.013844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:49.013850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:49.024561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:49.024585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:49.024591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:49.035287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:49.035307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:49.035314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:49.045640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:49.045660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:49.045667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:49.056222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:49.056243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:49.056250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:49.066549] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:49.066570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:49.066580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:49.077058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:49.077078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:49.077084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:49.087537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:49.087558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:49.087565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:49.096896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:49.096918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:49.096927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.319 [2024-12-06 18:02:49.106934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.319 [2024-12-06 18:02:49.106955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.319 [2024-12-06 18:02:49.106965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:01.320 [2024-12-06 18:02:49.117047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.320 [2024-12-06 18:02:49.117067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.320 [2024-12-06 18:02:49.117073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.320 [2024-12-06 18:02:49.125958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.320 [2024-12-06 18:02:49.125979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.320 [2024-12-06 18:02:49.125988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:01.320 [2024-12-06 18:02:49.133507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.320 [2024-12-06 18:02:49.133527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.320 [2024-12-06 18:02:49.133534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.320 [2024-12-06 18:02:49.141946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.320 [2024-12-06 18:02:49.141966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.320 [2024-12-06 18:02:49.141973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:01.580 [2024-12-06 18:02:49.147168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.580 [2024-12-06 18:02:49.147188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.580 [2024-12-06 18:02:49.147195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.580 [2024-12-06 18:02:49.156839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.580 [2024-12-06 18:02:49.156860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.580 [2024-12-06 18:02:49.156867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.580 [2024-12-06 18:02:49.166784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.580 [2024-12-06 18:02:49.166805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.580 [2024-12-06 18:02:49.166812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.580 [2024-12-06 18:02:49.176922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.580 [2024-12-06 18:02:49.176943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.580 [2024-12-06 18:02:49.176950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:01.580 [2024-12-06 18:02:49.187227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.580 [2024-12-06 18:02:49.187253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.580 [2024-12-06 18:02:49.187260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.580 [2024-12-06 18:02:49.197306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.580 [2024-12-06 18:02:49.197329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.580 [2024-12-06 18:02:49.197336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.580 [2024-12-06 18:02:49.207892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.580 [2024-12-06 18:02:49.207912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.580 [2024-12-06 18:02:49.207920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.580 [2024-12-06 18:02:49.218840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.580 [2024-12-06 18:02:49.218860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.580 [2024-12-06 18:02:49.218866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:01.580 [2024-12-06 18:02:49.229901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.580 [2024-12-06 18:02:49.229922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.580 [2024-12-06 18:02:49.229929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.581 [2024-12-06 18:02:49.240730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.581 [2024-12-06 18:02:49.240751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.581 [2024-12-06 18:02:49.240758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.581 [2024-12-06 18:02:49.251930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.581 [2024-12-06 18:02:49.251951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.581 [2024-12-06 18:02:49.251958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.581 [2024-12-06 18:02:49.262388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.581 [2024-12-06 18:02:49.262412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.581 [2024-12-06 18:02:49.262419] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:01.581 [2024-12-06 18:02:49.273799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1939650) 00:26:01.581 [2024-12-06 18:02:49.273818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.581 [2024-12-06 18:02:49.273825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.581 4173.00 IOPS, 521.62 MiB/s 00:26:01.581 Latency(us) 00:26:01.581 [2024-12-06T17:02:49.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.581 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:01.581 nvme0n1 : 2.00 4175.87 521.98 0.00 0.00 3828.87 491.52 12178.77 00:26:01.581 [2024-12-06T17:02:49.408Z] =================================================================================================================== 00:26:01.581 [2024-12-06T17:02:49.408Z] Total : 4175.87 521.98 0.00 0.00 3828.87 491.52 12178.77 00:26:01.581 { 00:26:01.581 "results": [ 00:26:01.581 { 00:26:01.581 "job": "nvme0n1", 00:26:01.581 "core_mask": "0x2", 00:26:01.581 "workload": "randread", 00:26:01.581 "status": "finished", 00:26:01.581 "queue_depth": 16, 00:26:01.581 "io_size": 131072, 00:26:01.581 "runtime": 2.002459, 00:26:01.581 "iops": 4175.865773032057, 00:26:01.581 "mibps": 521.9832216290072, 00:26:01.581 "io_failed": 0, 00:26:01.581 "io_timeout": 0, 00:26:01.581 "avg_latency_us": 3828.8668675755393, 00:26:01.581 "min_latency_us": 491.52, 00:26:01.581 "max_latency_us": 12178.773333333333 00:26:01.581 } 00:26:01.581 ], 00:26:01.581 "core_count": 1 00:26:01.581 } 00:26:01.581 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:01.581 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:01.581 | .driver_specific 00:26:01.581 | .nvme_error 00:26:01.581 | .status_code 00:26:01.581 | .command_transient_transport_error' 00:26:01.581 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:01.581 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 270 > 0 )) 00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3200534 00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3200534 ']' 00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3200534 00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3200534 00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
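The 521.98 MiB/s in the table is simply the measured 4175.87 IOPS times the 128 KiB I/O size, and the jq/rpc.py pair traced above is host/digest.sh's get_transient_errcount helper at work. Reassembled as a standalone sketch from the traced commands (same socket, same jq path; the errcount variable name is ours):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # bdev_get_iostat returns JSON; with --nvme-error-stat enabled, completions
    # are also counted per NVMe status code under driver_specific.nvme_error.
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # this run yields 270, so the check just below passes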
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 270 > 0 ))
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3200534
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3200534 ']'
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3200534
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3200534
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3200534'
00:26:01.842 killing process with pid 3200534
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3200534
00:26:01.842 Received shutdown signal, test time was about 2.000000 seconds
00:26:01.842
00:26:01.842 Latency(us)
[2024-12-06T17:02:49.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-06T17:02:49.669Z] ===================================================================================================================
[2024-12-06T17:02:49.669Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3200534
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3201258
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3201258 /var/tmp/bperf.sock
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3201258 ']'
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:01.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:01.842 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:26:01.842 [2024-12-06 18:02:49.641542] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization...
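Condensed from the trace above, the launch-and-wait pattern for this second bdevperf instance looks like the following (paths verbatim from the log; the polling loop is only a minimal stand-in for autotest_common.sh's waitforlisten, which allows up to max_retries=100 attempts):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -z parks bdevperf idle after init: no I/O runs until perform_tests arrives
    # on the RPC socket (-r), leaving time to reconfigure NVMe options and arm
    # the crc32c fault injection first.
    "$spdk"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    until "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1   # retry until the UNIX domain socket accepts RPCs
    done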
00:26:01.842 [2024-12-06 18:02:49.641600] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3201258 ] 00:26:02.101 [2024-12-06 18:02:49.704987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.101 [2024-12-06 18:02:49.734543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.101 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:02.101 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:02.101 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:02.101 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:02.362 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:02.362 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.362 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:02.362 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.362 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:02.362 18:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:02.362 nvme0n1 00:26:02.362 18:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:02.362 18:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.362 18:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:02.621 18:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.621 18:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:02.621 18:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:02.621 Running I/O for 2 seconds... 
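The xtrace above compresses the whole setup for this randwrite pass into one stream. Reconstructed as a plain shell sequence (flags, address, and NQN exactly as logged; paths shortened to the SPDK tree root; rpc_cmd is the autotest helper that talks to the nvmf target's RPC socket, while the -s /var/tmp/bperf.sock calls go to the bdevperf initiator launched here):

  # Start the initiator idle (-z waits for RPC); -t 2 matches "Running I/O for 2 seconds".
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # Initiator: keep NVMe error statistics and retry failed I/O indefinitely, so injected
  # digest errors surface as transient-error counts instead of ending the job.
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Target: crc32c error injection off while the controller attaches...
  rpc_cmd accel_error_inject_error -o crc32c -t disable

  # ...then attach with data digest enabled (--ddgst), so data PDUs carry a CRC32C.
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Target: corrupt crc32c results (injection flags -t corrupt -i 256 as logged), then run.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each "Data digest error on tqpair=..." record that follows appears to be the target-side tcp.c rejecting a WRITE whose CRC32C the accel injection deliberately corrupted; the command completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the initiator prints and, with --bdev-retry-count -1 in effect, retries.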
00:26:02.621 [2024-12-06 18:02:50.277722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016edece0 00:26:02.621 [2024-12-06 18:02:50.278772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.621 [2024-12-06 18:02:50.278799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:02.621 [2024-12-06 18:02:50.285684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016edf118 00:26:02.621 [2024-12-06 18:02:50.286583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.621 [2024-12-06 18:02:50.286600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:02.621 [2024-12-06 18:02:50.293645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef4b08 00:26:02.621 [2024-12-06 18:02:50.294396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.621 [2024-12-06 18:02:50.294414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:02.621 [2024-12-06 18:02:50.302506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.621 [2024-12-06 18:02:50.303298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.622 [2024-12-06 18:02:50.303315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.622 [2024-12-06 18:02:50.310927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.622 [2024-12-06 18:02:50.311729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.622 [2024-12-06 18:02:50.311745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.622 [2024-12-06 18:02:50.319372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.622 [2024-12-06 18:02:50.320163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.622 [2024-12-06 18:02:50.320180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.622 [2024-12-06 18:02:50.327769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.622 [2024-12-06 18:02:50.328552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.622 [2024-12-06 18:02:50.328568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:26:02.622 [2024-12-06 18:02:50.336171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.622 [2024-12-06 18:02:50.336959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.622 [2024-12-06 18:02:50.336975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.622 [2024-12-06 18:02:50.344582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.622 [2024-12-06 18:02:50.345364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.622 [2024-12-06 18:02:50.345381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.622 [2024-12-06 18:02:50.352987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.622 [2024-12-06 18:02:50.353772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.622 [2024-12-06 18:02:50.353789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.622 [2024-12-06 18:02:50.361393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.622 [2024-12-06 18:02:50.362198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.622 [2024-12-06 18:02:50.362215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.622 [2024-12-06 18:02:50.369774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.622 [2024-12-06 18:02:50.370567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.622 [2024-12-06 18:02:50.370584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.622 [2024-12-06 18:02:50.378148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.622 [2024-12-06 18:02:50.378928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.622 [2024-12-06 18:02:50.378945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.622 [2024-12-06 18:02:50.386520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.622 [2024-12-06 18:02:50.387315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.622 [2024-12-06 18:02:50.387332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.622 [2024-12-06 18:02:50.394899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.622 [2024-12-06 18:02:50.395684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.622 [2024-12-06 18:02:50.395700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.622 [2024-12-06 18:02:50.403290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.622 [2024-12-06 18:02:50.404075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.622 [2024-12-06 18:02:50.404091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.622 [2024-12-06 18:02:50.411670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.622 [2024-12-06 18:02:50.412454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.622 [2024-12-06 18:02:50.412476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.622 [2024-12-06 18:02:50.420034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.622 [2024-12-06 18:02:50.420823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.622 [2024-12-06 18:02:50.420839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.622 [2024-12-06 18:02:50.428390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.622 [2024-12-06 18:02:50.429162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.622 [2024-12-06 18:02:50.429178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.622 [2024-12-06 18:02:50.436784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.622 [2024-12-06 18:02:50.437567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.622 [2024-12-06 18:02:50.437583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.622 [2024-12-06 18:02:50.445173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.622 [2024-12-06 18:02:50.445952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.622 [2024-12-06 18:02:50.445968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.881 [2024-12-06 18:02:50.453553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.881 [2024-12-06 18:02:50.454348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.454365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.461921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.882 [2024-12-06 18:02:50.462724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.462740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.470290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.882 [2024-12-06 18:02:50.471087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.471106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.478665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.882 [2024-12-06 18:02:50.479455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.479471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.487047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.882 [2024-12-06 18:02:50.487834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.487853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.495424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.882 [2024-12-06 18:02:50.496208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.496224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.503817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.882 [2024-12-06 18:02:50.504599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.504615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.512171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.882 [2024-12-06 18:02:50.512952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.512969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.520527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.882 [2024-12-06 18:02:50.521327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.521342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.528908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.882 [2024-12-06 18:02:50.529699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.529715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.537292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.882 [2024-12-06 18:02:50.538078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.538094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.545669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.882 [2024-12-06 18:02:50.546448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.546465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.554034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:02.882 [2024-12-06 18:02:50.554828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.554845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.562983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee4de8 00:26:02.882 [2024-12-06 18:02:50.564001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.564017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.570800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eedd58 00:26:02.882 [2024-12-06 18:02:50.571468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.571485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.579119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eeee38 00:26:02.882 [2024-12-06 18:02:50.579777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.579793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.587498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee9e10 00:26:02.882 [2024-12-06 18:02:50.588150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.588166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.595876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee8d30 00:26:02.882 [2024-12-06 18:02:50.596537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.596554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.604250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee7c50 00:26:02.882 [2024-12-06 18:02:50.604933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.604950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.612612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eefae0 00:26:02.882 [2024-12-06 18:02:50.613275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.613291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.620993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016edfdc0 00:26:02.882 [2024-12-06 18:02:50.621663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.621679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.629366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016edece0 00:26:02.882 [2024-12-06 18:02:50.630007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.630023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.637744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee5658 00:26:02.882 [2024-12-06 18:02:50.638406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.638422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.646099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee6738 00:26:02.882 [2024-12-06 18:02:50.646763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.646779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.654455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef7100 00:26:02.882 [2024-12-06 18:02:50.655125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.655141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.662832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef3e60 00:26:02.882 [2024-12-06 18:02:50.663475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.663491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.671199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef2d80 00:26:02.882 [2024-12-06 18:02:50.671865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.882 [2024-12-06 18:02:50.671881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.882 [2024-12-06 18:02:50.679570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef1ca0 00:26:02.882 [2024-12-06 18:02:50.680229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.883 [2024-12-06 
18:02:50.680246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.883 [2024-12-06 18:02:50.687956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef0bc0 00:26:02.883 [2024-12-06 18:02:50.688626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.883 [2024-12-06 18:02:50.688643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.883 [2024-12-06 18:02:50.696319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eebfd0 00:26:02.883 [2024-12-06 18:02:50.696940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.883 [2024-12-06 18:02:50.696956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:02.883 [2024-12-06 18:02:50.704719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed0b0 00:26:02.883 [2024-12-06 18:02:50.705378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.883 [2024-12-06 18:02:50.705398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.713103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eee190 00:26:03.143 [2024-12-06 18:02:50.713757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.143 [2024-12-06 18:02:50.713773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.721476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efbcf0 00:26:03.143 [2024-12-06 18:02:50.722132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.143 [2024-12-06 18:02:50.722149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.729854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee99d8 00:26:03.143 [2024-12-06 18:02:50.730518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.143 [2024-12-06 18:02:50.730533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.738217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee88f8 00:26:03.143 [2024-12-06 18:02:50.738878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:03.143 [2024-12-06 18:02:50.738894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.746587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee7818 00:26:03.143 [2024-12-06 18:02:50.747268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.143 [2024-12-06 18:02:50.747285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.754974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eef6a8 00:26:03.143 [2024-12-06 18:02:50.755645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.143 [2024-12-06 18:02:50.755661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.763374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016edf988 00:26:03.143 [2024-12-06 18:02:50.764047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.143 [2024-12-06 18:02:50.764064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.771758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee49b0 00:26:03.143 [2024-12-06 18:02:50.772422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.143 [2024-12-06 18:02:50.772438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.780138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee5a90 00:26:03.143 [2024-12-06 18:02:50.780802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.143 [2024-12-06 18:02:50.780818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.788502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee6b70 00:26:03.143 [2024-12-06 18:02:50.789185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.143 [2024-12-06 18:02:50.789201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.796871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efc998 00:26:03.143 [2024-12-06 18:02:50.797528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17832 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:03.143 [2024-12-06 18:02:50.797544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.805270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef3a28 00:26:03.143 [2024-12-06 18:02:50.805928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.143 [2024-12-06 18:02:50.805944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.813683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef2948 00:26:03.143 [2024-12-06 18:02:50.814331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.143 [2024-12-06 18:02:50.814348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.822072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef1868 00:26:03.143 [2024-12-06 18:02:50.822735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.143 [2024-12-06 18:02:50.822751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.829876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efb480 00:26:03.143 [2024-12-06 18:02:50.830558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.143 [2024-12-06 18:02:50.830574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.838966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016edece0 00:26:03.143 [2024-12-06 18:02:50.839607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.143 [2024-12-06 18:02:50.839623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.847364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef7100 00:26:03.143 [2024-12-06 18:02:50.847999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.143 [2024-12-06 18:02:50.848016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.855759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee38d0 00:26:03.143 [2024-12-06 18:02:50.856396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19218 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.143 [2024-12-06 18:02:50.856413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.864150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee0630 00:26:03.143 [2024-12-06 18:02:50.864789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.143 [2024-12-06 18:02:50.864806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.872841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efeb58 00:26:03.143 [2024-12-06 18:02:50.873469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.143 [2024-12-06 18:02:50.873486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:03.143 [2024-12-06 18:02:50.881085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef5378 00:26:03.143 [2024-12-06 18:02:50.881672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.144 [2024-12-06 18:02:50.881689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:03.144 [2024-12-06 18:02:50.889485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eebb98 00:26:03.144 [2024-12-06 18:02:50.890077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.144 [2024-12-06 18:02:50.890094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:03.144 [2024-12-06 18:02:50.898178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eeaef0 00:26:03.144 [2024-12-06 18:02:50.898959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.144 [2024-12-06 18:02:50.898975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:03.144 [2024-12-06 18:02:50.908712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee5658 00:26:03.144 [2024-12-06 18:02:50.910186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.144 [2024-12-06 18:02:50.910203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:03.144 [2024-12-06 18:02:50.914689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef0ff8 00:26:03.144 [2024-12-06 18:02:50.915379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:6076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.144 [2024-12-06 18:02:50.915396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:03.144 [2024-12-06 18:02:50.924471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eeea00 00:26:03.144 [2024-12-06 18:02:50.925518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.144 [2024-12-06 18:02:50.925537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.144 [2024-12-06 18:02:50.932779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef6458 00:26:03.144 [2024-12-06 18:02:50.933809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.144 [2024-12-06 18:02:50.933825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.144 [2024-12-06 18:02:50.941177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efdeb0 00:26:03.144 [2024-12-06 18:02:50.942248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.144 [2024-12-06 18:02:50.942264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.144 [2024-12-06 18:02:50.949559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ede038 00:26:03.144 [2024-12-06 18:02:50.950607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.144 [2024-12-06 18:02:50.950624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.144 [2024-12-06 18:02:50.957985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee0ea0 00:26:03.144 [2024-12-06 18:02:50.959035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.144 [2024-12-06 18:02:50.959052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.144 [2024-12-06 18:02:50.966371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef9f68 00:26:03.144 [2024-12-06 18:02:50.967439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.144 [2024-12-06 18:02:50.967456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.403 [2024-12-06 18:02:50.975033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eefae0 00:26:03.403 [2024-12-06 18:02:50.976196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:116 nsid:1 lba:19636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.403 [2024-12-06 18:02:50.976213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:03.403 [2024-12-06 18:02:50.982875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eef6a8 00:26:03.403 [2024-12-06 18:02:50.983901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.403 [2024-12-06 18:02:50.983917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:03.403 [2024-12-06 18:02:50.991536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef6020 00:26:03.403 [2024-12-06 18:02:50.992559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.403 [2024-12-06 18:02:50.992577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:03.403 [2024-12-06 18:02:50.999922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee12d8 00:26:03.403 [2024-12-06 18:02:51.000956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.403 [2024-12-06 18:02:51.000973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:03.403 [2024-12-06 18:02:51.008329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef35f0 00:26:03.403 [2024-12-06 18:02:51.009357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.403 [2024-12-06 18:02:51.009374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:03.403 [2024-12-06 18:02:51.016686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee6300 00:26:03.403 [2024-12-06 18:02:51.017712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.403 [2024-12-06 18:02:51.017729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:03.403 [2024-12-06 18:02:51.025061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee5220 00:26:03.403 [2024-12-06 18:02:51.026073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.403 [2024-12-06 18:02:51.026090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:03.403 [2024-12-06 18:02:51.033466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed4e8 00:26:03.404 [2024-12-06 18:02:51.034496] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.034513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.041870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef0ff8 00:26:03.404 [2024-12-06 18:02:51.042902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.042918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.050261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef5378 00:26:03.404 [2024-12-06 18:02:51.051299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.051316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.058637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eeff18 00:26:03.404 [2024-12-06 18:02:51.059666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.059683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.067003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee1b48 00:26:03.404 [2024-12-06 18:02:51.068030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.068047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.075408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef2d80 00:26:03.404 [2024-12-06 18:02:51.076427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.076444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.083795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:03.404 [2024-12-06 18:02:51.084823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.084840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.093291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee49b0 00:26:03.404 [2024-12-06 18:02:51.094805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.094821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.099262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eebb98 00:26:03.404 [2024-12-06 18:02:51.099959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.099976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.107758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef1868 00:26:03.404 [2024-12-06 18:02:51.108484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.108501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.116133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef4b08 00:26:03.404 [2024-12-06 18:02:51.116837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.116854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.124525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef6cc8 00:26:03.404 [2024-12-06 18:02:51.125257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.125274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.132904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef9f68 00:26:03.404 [2024-12-06 18:02:51.133609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.133626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.141300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef8e88 00:26:03.404 [2024-12-06 18:02:51.141964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.141984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.149681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef7da8 00:26:03.404 [2024-12-06 
18:02:51.150388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.150405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.158039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef0788 00:26:03.404 [2024-12-06 18:02:51.158749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.158766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.165882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee4de8 00:26:03.404 [2024-12-06 18:02:51.166577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.166594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.175951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016edfdc0 00:26:03.404 [2024-12-06 18:02:51.176973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.176989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.183821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef2948 00:26:03.404 [2024-12-06 18:02:51.184738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.184755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.192786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee9e10 00:26:03.404 [2024-12-06 18:02:51.193706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.193723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.201165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eeee38 00:26:03.404 [2024-12-06 18:02:51.202078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.404 [2024-12-06 18:02:51.202094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:03.404 [2024-12-06 18:02:51.209655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eef6a8 
00:26:03.404 [2024-12-06 18:02:51.210582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.405 [2024-12-06 18:02:51.210600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:03.405 [2024-12-06 18:02:51.218047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efd208 00:26:03.405 [2024-12-06 18:02:51.218985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.405 [2024-12-06 18:02:51.219002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:03.405 [2024-12-06 18:02:51.226454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee5a90 00:26:03.405 [2024-12-06 18:02:51.227392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.405 [2024-12-06 18:02:51.227410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:03.663 [2024-12-06 18:02:51.234854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee49b0 00:26:03.663 [2024-12-06 18:02:51.235783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.663 [2024-12-06 18:02:51.235800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:03.663 [2024-12-06 18:02:51.243232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee6b70 00:26:03.663 [2024-12-06 18:02:51.244139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.663 [2024-12-06 18:02:51.244155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:03.663 [2024-12-06 18:02:51.251602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eed920 00:26:03.663 [2024-12-06 18:02:51.252524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.663 [2024-12-06 18:02:51.252540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:03.663 [2024-12-06 18:02:51.260000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef5be8 00:26:03.663 [2024-12-06 18:02:51.260928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.663 [2024-12-06 18:02:51.260945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:03.663 30108.00 IOPS, 117.61 MiB/s [2024-12-06T17:02:51.490Z] [2024-12-06 18:02:51.268374] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016edfdc0 00:26:03.663 [2024-12-06 18:02:51.269306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.663 [2024-12-06 18:02:51.269322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:03.663 [2024-12-06 18:02:51.276759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016edfdc0 00:26:03.663 [2024-12-06 18:02:51.277680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.663 [2024-12-06 18:02:51.277697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:03.663 [2024-12-06 18:02:51.285135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016edfdc0 00:26:03.663 [2024-12-06 18:02:51.286070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.663 [2024-12-06 18:02:51.286087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:03.663 [2024-12-06 18:02:51.293499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016edfdc0 00:26:03.663 [2024-12-06 18:02:51.294417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.663 [2024-12-06 18:02:51.294433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:03.663 [2024-12-06 18:02:51.302930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016edfdc0 00:26:03.663 [2024-12-06 18:02:51.304277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.663 [2024-12-06 18:02:51.304294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:03.663 [2024-12-06 18:02:51.310748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef4298 00:26:03.664 [2024-12-06 18:02:51.311780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.311796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 [2024-12-06 18:02:51.319061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef46d0 00:26:03.664 [2024-12-06 18:02:51.320108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.320124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 [2024-12-06 18:02:51.327441] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef0ff8 00:26:03.664 [2024-12-06 18:02:51.328455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.328471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 [2024-12-06 18:02:51.335797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eeaab8 00:26:03.664 [2024-12-06 18:02:51.336785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.336801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 [2024-12-06 18:02:51.344159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee0ea0 00:26:03.664 [2024-12-06 18:02:51.345137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.345153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 [2024-12-06 18:02:51.352548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef96f8 00:26:03.664 [2024-12-06 18:02:51.353540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.353556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 [2024-12-06 18:02:51.360923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efe720 00:26:03.664 [2024-12-06 18:02:51.361944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.361963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 [2024-12-06 18:02:51.369307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ede8a8 00:26:03.664 [2024-12-06 18:02:51.370339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.370356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 [2024-12-06 18:02:51.377663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee4140 00:26:03.664 [2024-12-06 18:02:51.378680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.378696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 
[2024-12-06 18:02:51.386024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef9b30 00:26:03.664 [2024-12-06 18:02:51.387063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.387079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 [2024-12-06 18:02:51.394404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef6020 00:26:03.664 [2024-12-06 18:02:51.395436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.395453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 [2024-12-06 18:02:51.402797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efac10 00:26:03.664 [2024-12-06 18:02:51.403814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.403831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 [2024-12-06 18:02:51.411175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee12d8 00:26:03.664 [2024-12-06 18:02:51.412156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.412172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 [2024-12-06 18:02:51.419545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee8088 00:26:03.664 [2024-12-06 18:02:51.420559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.420575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 [2024-12-06 18:02:51.427895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eef6a8 00:26:03.664 [2024-12-06 18:02:51.428916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.428932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 [2024-12-06 18:02:51.436273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eeee38 00:26:03.664 [2024-12-06 18:02:51.437306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.437322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 
dnr:0 00:26:03.664 [2024-12-06 18:02:51.444653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee9e10 00:26:03.664 [2024-12-06 18:02:51.445676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.445692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 [2024-12-06 18:02:51.453026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef3e60 00:26:03.664 [2024-12-06 18:02:51.454044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.454060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 [2024-12-06 18:02:51.461403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee38d0 00:26:03.664 [2024-12-06 18:02:51.462418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.462434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 [2024-12-06 18:02:51.469758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef4f40 00:26:03.664 [2024-12-06 18:02:51.470773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.470789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 [2024-12-06 18:02:51.478111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eea680 00:26:03.664 [2024-12-06 18:02:51.479151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.479168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.664 [2024-12-06 18:02:51.486493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee0a68 00:26:03.664 [2024-12-06 18:02:51.487508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.664 [2024-12-06 18:02:51.487524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.923 [2024-12-06 18:02:51.494873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efeb58 00:26:03.923 [2024-12-06 18:02:51.495853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.923 [2024-12-06 18:02:51.495869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 
sqhd:0063 p:0 m:0 dnr:0 00:26:03.923 [2024-12-06 18:02:51.503264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ede470 00:26:03.923 [2024-12-06 18:02:51.504309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.923 [2024-12-06 18:02:51.504325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.923 [2024-12-06 18:02:51.511627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee9168 00:26:03.923 [2024-12-06 18:02:51.512643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.512659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.519984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee2c28 00:26:03.924 [2024-12-06 18:02:51.521004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.521020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.528346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef9f68 00:26:03.924 [2024-12-06 18:02:51.529360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.529376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.536733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efdeb0 00:26:03.924 [2024-12-06 18:02:51.537752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.537768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.545116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee1f80 00:26:03.924 [2024-12-06 18:02:51.546131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.546147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.553503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efc998 00:26:03.924 [2024-12-06 18:02:51.554524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.554540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.561855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efb048 00:26:03.924 [2024-12-06 18:02:51.562874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.562889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.570216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef57b0 00:26:03.924 [2024-12-06 18:02:51.571231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.571247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.578605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee99d8 00:26:03.924 [2024-12-06 18:02:51.579630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.579646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.586997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef4298 00:26:03.924 [2024-12-06 18:02:51.588018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.588034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.595382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef46d0 00:26:03.924 [2024-12-06 18:02:51.596425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.596442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.603760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef0ff8 00:26:03.924 [2024-12-06 18:02:51.604777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.604794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.612121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eeaab8 00:26:03.924 [2024-12-06 18:02:51.613142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.613158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.620489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee0ea0 00:26:03.924 [2024-12-06 18:02:51.621507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.621523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.628873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef96f8 00:26:03.924 [2024-12-06 18:02:51.629908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.629925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.637257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efe720 00:26:03.924 [2024-12-06 18:02:51.638280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.638295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.645645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ede8a8 00:26:03.924 [2024-12-06 18:02:51.646671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.646688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.653998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee4140 00:26:03.924 [2024-12-06 18:02:51.655016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.655035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.662362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef9b30 00:26:03.924 [2024-12-06 18:02:51.663418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.663434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.670744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef6020 00:26:03.924 [2024-12-06 18:02:51.671767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.671783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.679127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efac10 00:26:03.924 [2024-12-06 18:02:51.680135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.680151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.687505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee12d8 00:26:03.924 [2024-12-06 18:02:51.688521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.688537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.695874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee8088 00:26:03.924 [2024-12-06 18:02:51.696899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.696915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.704248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eef6a8 00:26:03.924 [2024-12-06 18:02:51.705241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.705258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.712636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eeee38 00:26:03.924 [2024-12-06 18:02:51.713653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.713669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.721027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee9e10 00:26:03.924 [2024-12-06 18:02:51.722047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.722064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.729534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef3e60 00:26:03.924 [2024-12-06 18:02:51.730557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.924 [2024-12-06 18:02:51.730574] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.924 [2024-12-06 18:02:51.737932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee38d0 00:26:03.925 [2024-12-06 18:02:51.738952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.925 [2024-12-06 18:02:51.738968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:03.925 [2024-12-06 18:02:51.746293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef4f40 00:26:03.925 [2024-12-06 18:02:51.747284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.925 [2024-12-06 18:02:51.747300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.754655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eea680 00:26:04.184 [2024-12-06 18:02:51.755670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.184 [2024-12-06 18:02:51.755687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.763050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee0a68 00:26:04.184 [2024-12-06 18:02:51.764090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.184 [2024-12-06 18:02:51.764108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.771429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efeb58 00:26:04.184 [2024-12-06 18:02:51.772413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.184 [2024-12-06 18:02:51.772430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.779814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ede470 00:26:04.184 [2024-12-06 18:02:51.780834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.184 [2024-12-06 18:02:51.780850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.788190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee9168 00:26:04.184 [2024-12-06 18:02:51.789196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.184 [2024-12-06 18:02:51.789212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.796556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee2c28 00:26:04.184 [2024-12-06 18:02:51.797573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.184 [2024-12-06 18:02:51.797589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.804949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef9f68 00:26:04.184 [2024-12-06 18:02:51.805964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.184 [2024-12-06 18:02:51.805980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.813344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efdeb0 00:26:04.184 [2024-12-06 18:02:51.814346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.184 [2024-12-06 18:02:51.814362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.821715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee1f80 00:26:04.184 [2024-12-06 18:02:51.822728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.184 [2024-12-06 18:02:51.822745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.830089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efc998 00:26:04.184 [2024-12-06 18:02:51.831108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.184 [2024-12-06 18:02:51.831124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.838454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efb048 00:26:04.184 [2024-12-06 18:02:51.839477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.184 [2024-12-06 18:02:51.839493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.846812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef57b0 00:26:04.184 [2024-12-06 18:02:51.847831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.184 [2024-12-06 
18:02:51.847848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.855204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee99d8 00:26:04.184 [2024-12-06 18:02:51.856213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.184 [2024-12-06 18:02:51.856229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.863579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef4298 00:26:04.184 [2024-12-06 18:02:51.864598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.184 [2024-12-06 18:02:51.864614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.871967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef46d0 00:26:04.184 [2024-12-06 18:02:51.872986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.184 [2024-12-06 18:02:51.873005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.880323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef0ff8 00:26:04.184 [2024-12-06 18:02:51.881309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.184 [2024-12-06 18:02:51.881325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.888680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eeaab8 00:26:04.184 [2024-12-06 18:02:51.889701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.184 [2024-12-06 18:02:51.889717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.897059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee0ea0 00:26:04.184 [2024-12-06 18:02:51.898084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.184 [2024-12-06 18:02:51.898104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.905444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef96f8 00:26:04.184 [2024-12-06 18:02:51.906431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:04.184 [2024-12-06 18:02:51.906448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.184 [2024-12-06 18:02:51.913817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efe720 00:26:04.185 [2024-12-06 18:02:51.914838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.185 [2024-12-06 18:02:51.914854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.185 [2024-12-06 18:02:51.922197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ede8a8 00:26:04.185 [2024-12-06 18:02:51.923214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.185 [2024-12-06 18:02:51.923230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.185 [2024-12-06 18:02:51.930559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee4140 00:26:04.185 [2024-12-06 18:02:51.931577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.185 [2024-12-06 18:02:51.931593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.185 [2024-12-06 18:02:51.938930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef9b30 00:26:04.185 [2024-12-06 18:02:51.939911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.185 [2024-12-06 18:02:51.939927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.185 [2024-12-06 18:02:51.947321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef6020 00:26:04.185 [2024-12-06 18:02:51.948338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.185 [2024-12-06 18:02:51.948354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.185 [2024-12-06 18:02:51.955703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efac10 00:26:04.185 [2024-12-06 18:02:51.956703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.185 [2024-12-06 18:02:51.956719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.185 [2024-12-06 18:02:51.964087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee12d8 00:26:04.185 [2024-12-06 18:02:51.965111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22678 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:04.185 [2024-12-06 18:02:51.965128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.185 [2024-12-06 18:02:51.972457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee8088 00:26:04.185 [2024-12-06 18:02:51.973437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.185 [2024-12-06 18:02:51.973453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.185 [2024-12-06 18:02:51.980808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eef6a8 00:26:04.185 [2024-12-06 18:02:51.981832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.185 [2024-12-06 18:02:51.981848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.185 [2024-12-06 18:02:51.989196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eeee38 00:26:04.185 [2024-12-06 18:02:51.990193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.185 [2024-12-06 18:02:51.990209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.185 [2024-12-06 18:02:51.997577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee9e10 00:26:04.185 [2024-12-06 18:02:51.998598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.185 [2024-12-06 18:02:51.998614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.185 [2024-12-06 18:02:52.005978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef3e60 00:26:04.185 [2024-12-06 18:02:52.007003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.185 [2024-12-06 18:02:52.007019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.444 [2024-12-06 18:02:52.014361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee38d0 00:26:04.444 [2024-12-06 18:02:52.015376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.444 [2024-12-06 18:02:52.015393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.444 [2024-12-06 18:02:52.022715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef4f40 00:26:04.444 [2024-12-06 18:02:52.023740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20158 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.444 [2024-12-06 18:02:52.023756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.444 [2024-12-06 18:02:52.031078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eea680 00:26:04.444 [2024-12-06 18:02:52.032058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.444 [2024-12-06 18:02:52.032074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.444 [2024-12-06 18:02:52.039467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee0a68 00:26:04.444 [2024-12-06 18:02:52.040478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.444 [2024-12-06 18:02:52.040495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.444 [2024-12-06 18:02:52.047851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efeb58 00:26:04.444 [2024-12-06 18:02:52.048867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.444 [2024-12-06 18:02:52.048884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.444 [2024-12-06 18:02:52.056242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ede470 00:26:04.444 [2024-12-06 18:02:52.057232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.444 [2024-12-06 18:02:52.057247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.444 [2024-12-06 18:02:52.064601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee9168 00:26:04.444 [2024-12-06 18:02:52.065628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.444 [2024-12-06 18:02:52.065644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.072958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee2c28 00:26:04.445 [2024-12-06 18:02:52.073978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.073994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.081347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef9f68 00:26:04.445 [2024-12-06 18:02:52.082361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:1405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.082377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.089726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efdeb0 00:26:04.445 [2024-12-06 18:02:52.090741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.090759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.098105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee1f80 00:26:04.445 [2024-12-06 18:02:52.099130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.099147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.106482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efc998 00:26:04.445 [2024-12-06 18:02:52.107502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.107518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.114837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efb048 00:26:04.445 [2024-12-06 18:02:52.115853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.115869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.123209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef57b0 00:26:04.445 [2024-12-06 18:02:52.124243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.124259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.131590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee99d8 00:26:04.445 [2024-12-06 18:02:52.132605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.132621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.139962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef4298 00:26:04.445 [2024-12-06 18:02:52.140977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:94 nsid:1 lba:22850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.140993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.148344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef46d0 00:26:04.445 [2024-12-06 18:02:52.149324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.149341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.156696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef0ff8 00:26:04.445 [2024-12-06 18:02:52.157711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.157727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.165056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eeaab8 00:26:04.445 [2024-12-06 18:02:52.166112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.166128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.173491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee0ea0 00:26:04.445 [2024-12-06 18:02:52.174479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.174496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.181874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef96f8 00:26:04.445 [2024-12-06 18:02:52.182899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.182915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.190398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efe720 00:26:04.445 [2024-12-06 18:02:52.191422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.191438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.198771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ede8a8 00:26:04.445 [2024-12-06 18:02:52.199794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.199810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.207142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee4140 00:26:04.445 [2024-12-06 18:02:52.208177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.208193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.215592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef9b30 00:26:04.445 [2024-12-06 18:02:52.216635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.216652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.223976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ef6020 00:26:04.445 [2024-12-06 18:02:52.224993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.225009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.232356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016efac10 00:26:04.445 [2024-12-06 18:02:52.233372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.233389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.240734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee12d8 00:26:04.445 [2024-12-06 18:02:52.241754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.241770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.249090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016ee8088 00:26:04.445 [2024-12-06 18:02:52.250092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.250111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.257447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eef6a8 00:26:04.445 [2024-12-06 18:02:52.258462] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.258479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.445 [2024-12-06 18:02:52.265819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb279d0) with pdu=0x200016eeee38 00:26:04.445 30297.50 IOPS, 118.35 MiB/s [2024-12-06T17:02:52.272Z] [2024-12-06 18:02:52.267043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.445 [2024-12-06 18:02:52.267058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:04.703 00:26:04.703 Latency(us) 00:26:04.703 [2024-12-06T17:02:52.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.703 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:04.703 nvme0n1 : 2.00 30319.87 118.44 0.00 0.00 4216.34 2129.92 15073.28 00:26:04.703 [2024-12-06T17:02:52.530Z] =================================================================================================================== 00:26:04.703 [2024-12-06T17:02:52.530Z] Total : 30319.87 118.44 0.00 0.00 4216.34 2129.92 15073.28 00:26:04.703 { 00:26:04.703 "results": [ 00:26:04.703 { 00:26:04.703 "job": "nvme0n1", 00:26:04.703 "core_mask": "0x2", 00:26:04.703 "workload": "randwrite", 00:26:04.703 "status": "finished", 00:26:04.703 "queue_depth": 128, 00:26:04.703 "io_size": 4096, 00:26:04.703 "runtime": 2.00489, 00:26:04.703 "iops": 30319.86792292844, 00:26:04.703 "mibps": 118.43698407393921, 00:26:04.703 "io_failed": 0, 00:26:04.703 "io_timeout": 0, 00:26:04.703 "avg_latency_us": 4216.34296593626, 00:26:04.703 "min_latency_us": 2129.92, 00:26:04.703 "max_latency_us": 15073.28 00:26:04.703 } 00:26:04.703 ], 00:26:04.703 "core_count": 1 00:26:04.703 } 00:26:04.703 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:04.703 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:04.703 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:04.703 | .driver_specific 00:26:04.703 | .nvme_error 00:26:04.703 | .status_code 00:26:04.703 | .command_transient_transport_error' 00:26:04.703 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:04.704 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 238 > 0 )) 00:26:04.704 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3201258 00:26:04.704 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3201258 ']' 00:26:04.704 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3201258 00:26:04.704 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:04.704 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
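The get_transient_errcount trace above is the test's pass/fail check: it pulls the per-bdev NVMe error counters out of bdevperf over its RPC socket and requires that at least one COMMAND TRANSIENT TRANSPORT ERROR was recorded (238 in this run). A minimal standalone sketch of that check, using only the command and jq filter visible in the trace; everything else is annotation:

    # Count the transient transport errors recorded for nvme0n1 (sketch; the
    # rpc.py path and jq filter are exactly as traced above).
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code
               | .command_transient_transport_error')
    (( errcount > 0 ))   # the test passes only if digest errors were counted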
00:26:04.704 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3201258 00:26:04.704 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:04.704 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:04.704 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3201258' 00:26:04.704 killing process with pid 3201258 00:26:04.704 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3201258 00:26:04.704 Received shutdown signal, test time was about 2.000000 seconds 00:26:04.704 00:26:04.704 Latency(us) 00:26:04.704 [2024-12-06T17:02:52.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.704 [2024-12-06T17:02:52.531Z] =================================================================================================================== 00:26:04.704 [2024-12-06T17:02:52.531Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:04.704 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3201258 00:26:04.962 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:04.962 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:04.962 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:04.962 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:04.962 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:04.962 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3201867 00:26:04.962 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3201867 /var/tmp/bperf.sock 00:26:04.962 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3201867 ']' 00:26:04.962 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:04.962 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:04.962 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:04.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:04.962 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:04.962 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:04.962 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:04.962 [2024-12-06 18:02:52.625698] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
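The bdevperf command line just traced drives the next error-injection pass. A hedged reading of its flags (standard bdevperf option meanings, not stated in this log):

    # -m 2          core mask 0x2: run the reactor on core 1
    # -r <sock>     serve RPCs on /var/tmp/bperf.sock
    # -w randwrite  random-write workload
    # -o 131072     I/O size in bytes (128 KiB, above the 64 KiB zero-copy threshold)
    # -t 2          run for 2 seconds
    # -q 16         queue depth of 16
    # -z            start idle and wait for a perform_tests RPC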
00:26:04.962 [2024-12-06 18:02:52.625756] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3201867 ] 00:26:04.962 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:04.962 Zero copy mechanism will not be used. 00:26:04.962 [2024-12-06 18:02:52.689796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.962 [2024-12-06 18:02:52.719568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.962 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:04.962 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:04.962 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:04.962 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:05.221 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:05.221 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.221 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:05.221 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.221 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:05.221 18:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:05.480 nvme0n1 00:26:05.480 18:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:05.480 18:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.480 18:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:05.480 18:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.480 18:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:05.480 18:02:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:05.480 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:05.480 Zero copy mechanism will not be used. 00:26:05.480 Running I/O for 2 seconds... 
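With bdevperf started idle (-z), the harness arms the digest-error scenario over RPC before kicking off I/O. The sequence traced above, condensed and annotated; the commands are copied from the trace, while the comments are interpretation. Note that bperf_rpc targets /var/tmp/bperf.sock, whereas rpc_cmd is traced without -s and so presumably hits the target application's default RPC socket:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Keep per-status-code NVMe error counters and retry failed I/O forever,
    # so injected digest errors are counted instead of failing the run.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any previous crc32c injection, then attach over TCP with data
    # digest enabled (--ddgst): every data PDU now carries a CRC32C.
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Inject corruption into CRC32C operations (injection argument -i 32 as
    # traced), so WRITEs periodically fail digest verification with a
    # transient transport error (00/22), the pattern in the run that follows.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

    # Start the armed 2-second randwrite run.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests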
00:26:05.480 [2024-12-06 18:02:53.268728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.480 [2024-12-06 18:02:53.268995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.480 [2024-12-06 18:02:53.269021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:05.480 [2024-12-06 18:02:53.278023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.480 [2024-12-06 18:02:53.278237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.480 [2024-12-06 18:02:53.278256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:05.480 [2024-12-06 18:02:53.288212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.480 [2024-12-06 18:02:53.288410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.480 [2024-12-06 18:02:53.288426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:05.480 [2024-12-06 18:02:53.296988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.480 [2024-12-06 18:02:53.297180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.480 [2024-12-06 18:02:53.297202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:05.480 [2024-12-06 18:02:53.306255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.480 [2024-12-06 18:02:53.306452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.480 [2024-12-06 18:02:53.306468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:05.739 [2024-12-06 18:02:53.315613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.739 [2024-12-06 18:02:53.315852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.739 [2024-12-06 18:02:53.315868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:05.739 [2024-12-06 18:02:53.324518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.739 [2024-12-06 18:02:53.324782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.739 [2024-12-06 18:02:53.324801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:26:05.739 [2024-12-06 18:02:53.334669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.739 [2024-12-06 18:02:53.334800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.739 [2024-12-06 18:02:53.334816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:05.739 [2024-12-06 18:02:53.344364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.739 [2024-12-06 18:02:53.344568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.739 [2024-12-06 18:02:53.344584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:05.739 [2024-12-06 18:02:53.355207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.739 [2024-12-06 18:02:53.355422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.739 [2024-12-06 18:02:53.355438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.365860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.366061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.366077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.375550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.375769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.375786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.386154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.386374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.386390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.396178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.396423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.396441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.404761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.404965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.404981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.414135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.414361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.414379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.424655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.424829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.424844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.435256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.435559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.435576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.445197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.445397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.445413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.454751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.454952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.454969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.464516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.464719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.464735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.474286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.474498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.474514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.484478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.484679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.484695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.494131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.494260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.494276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.504073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.504450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.504468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.513976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.514323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.514340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.523401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.523640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.523656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.532568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.532787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.532803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.542596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.542843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.542860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.552373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.552576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.552594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:05.740 [2024-12-06 18:02:53.562197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.740 [2024-12-06 18:02:53.562369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.740 [2024-12-06 18:02:53.562385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:05.999 [2024-12-06 18:02:53.571790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.999 [2024-12-06 18:02:53.571996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.999 [2024-12-06 18:02:53.572012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:05.999 [2024-12-06 18:02:53.581334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.999 [2024-12-06 18:02:53.581541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.999 [2024-12-06 18:02:53.581557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:05.999 [2024-12-06 18:02:53.589898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.999 [2024-12-06 18:02:53.590072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.999 [2024-12-06 18:02:53.590088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:05.999 [2024-12-06 18:02:53.598591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:05.999 [2024-12-06 18:02:53.598832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.000 [2024-12-06 18:02:53.598849] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.607388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.607690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.000 [2024-12-06 18:02:53.607707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.616405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.616561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.000 [2024-12-06 18:02:53.616578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.625286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.625522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.000 [2024-12-06 18:02:53.625538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.633092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.633332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.000 [2024-12-06 18:02:53.633353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.641353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.641516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.000 [2024-12-06 18:02:53.641532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.649939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.650196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.000 [2024-12-06 18:02:53.650213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.657087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.657419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.000 [2024-12-06 
18:02:53.657438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.664938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.665140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.000 [2024-12-06 18:02:53.665158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.673002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.673200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.000 [2024-12-06 18:02:53.673218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.676627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.676792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.000 [2024-12-06 18:02:53.676809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.682044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.682216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.000 [2024-12-06 18:02:53.682234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.684801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.685032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.000 [2024-12-06 18:02:53.685050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.688009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.688206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.000 [2024-12-06 18:02:53.688224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.696612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.696769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:06.000 [2024-12-06 18:02:53.696786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.705530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.705776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.000 [2024-12-06 18:02:53.705793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.714893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.715060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.000 [2024-12-06 18:02:53.715077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.723470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.723714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.000 [2024-12-06 18:02:53.723731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.731981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.732188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.000 [2024-12-06 18:02:53.732206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.739802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.740098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.000 [2024-12-06 18:02:53.740121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:06.000 [2024-12-06 18:02:53.747735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.000 [2024-12-06 18:02:53.748073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.001 [2024-12-06 18:02:53.748090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.001 [2024-12-06 18:02:53.756110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.001 [2024-12-06 18:02:53.756276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:06.001 [2024-12-06 18:02:53.756293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:06.001 [2024-12-06 18:02:53.764538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.001 [2024-12-06 18:02:53.764750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.001 [2024-12-06 18:02:53.764765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:06.001 [2024-12-06 18:02:53.772695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.001 [2024-12-06 18:02:53.772733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.001 [2024-12-06 18:02:53.772749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:06.001 [2024-12-06 18:02:53.779661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.001 [2024-12-06 18:02:53.779705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.001 [2024-12-06 18:02:53.779721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.001 [2024-12-06 18:02:53.788018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.001 [2024-12-06 18:02:53.788058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.001 [2024-12-06 18:02:53.788074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:06.001 [2024-12-06 18:02:53.796608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.001 [2024-12-06 18:02:53.796803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.001 [2024-12-06 18:02:53.796819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:06.001 [2024-12-06 18:02:53.804960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.001 [2024-12-06 18:02:53.805176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.001 [2024-12-06 18:02:53.805193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:06.001 [2024-12-06 18:02:53.813946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.001 [2024-12-06 18:02:53.814124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.001 [2024-12-06 18:02:53.814140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.001 [2024-12-06 18:02:53.823475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.001 [2024-12-06 18:02:53.823681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.001 [2024-12-06 18:02:53.823698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:06.260 [2024-12-06 18:02:53.832946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.260 [2024-12-06 18:02:53.833203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.260 [2024-12-06 18:02:53.833223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:06.260 [2024-12-06 18:02:53.843130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.260 [2024-12-06 18:02:53.843349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.260 [2024-12-06 18:02:53.843365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:06.260 [2024-12-06 18:02:53.852897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.260 [2024-12-06 18:02:53.853143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.260 [2024-12-06 18:02:53.853159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.260 [2024-12-06 18:02:53.862627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.260 [2024-12-06 18:02:53.862788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.260 [2024-12-06 18:02:53.862804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:06.260 [2024-12-06 18:02:53.872921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.260 [2024-12-06 18:02:53.873185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.260 [2024-12-06 18:02:53.873201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:06.260 [2024-12-06 18:02:53.883176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.260 [2024-12-06 18:02:53.883448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.260 [2024-12-06 18:02:53.883466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:06.260 [2024-12-06 18:02:53.893121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.260 [2024-12-06 18:02:53.893322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.260 [2024-12-06 18:02:53.893337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.260 [2024-12-06 18:02:53.902789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.260 [2024-12-06 18:02:53.902962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.260 [2024-12-06 18:02:53.902978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:06.260 [2024-12-06 18:02:53.911754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.260 [2024-12-06 18:02:53.911908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.260 [2024-12-06 18:02:53.911924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:06.260 [2024-12-06 18:02:53.921156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.260 [2024-12-06 18:02:53.921325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.260 [2024-12-06 18:02:53.921342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:06.260 [2024-12-06 18:02:53.930608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.260 [2024-12-06 18:02:53.930793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.260 [2024-12-06 18:02:53.930810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.260 [2024-12-06 18:02:53.940158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.260 [2024-12-06 18:02:53.940281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.260 [2024-12-06 18:02:53.940297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:06.260 [2024-12-06 18:02:53.949506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:06.260 [2024-12-06 18:02:53.949607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.260 [2024-12-06 18:02:53.949623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:06.260 [2024-12-06 18:02:53.959657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8
00:26:06.260 [2024-12-06 18:02:53.959839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.260 [2024-12-06 18:02:53.959855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:06.260 [2024-12-06 18:02:53.969825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8
00:26:06.260 [2024-12-06 18:02:53.970063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.260 [2024-12-06 18:02:53.970080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
...
3797.00 IOPS, 474.62 MiB/s [2024-12-06T17:02:54.350Z]
...
00:26:07.045 [2024-12-06 18:02:54.830149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8
00:26:07.045 [2024-12-06 18:02:54.830202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.045 [2024-12-06 18:02:54.830218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.045 [2024-12-06 18:02:54.832634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.045 [2024-12-06 18:02:54.832673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.045 [2024-12-06 18:02:54.832690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.045 [2024-12-06 18:02:54.835115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.045 [2024-12-06 18:02:54.835169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.045 [2024-12-06 18:02:54.835185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.045 [2024-12-06 18:02:54.837624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.045 [2024-12-06 18:02:54.837666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.045 [2024-12-06 18:02:54.837682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.045 [2024-12-06 18:02:54.840135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.045 [2024-12-06 18:02:54.840177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.045 [2024-12-06 18:02:54.840192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.045 [2024-12-06 18:02:54.842623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.045 [2024-12-06 18:02:54.842662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.045 [2024-12-06 18:02:54.842678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.045 [2024-12-06 18:02:54.845206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.045 [2024-12-06 18:02:54.845253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.045 [2024-12-06 18:02:54.845269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.045 [2024-12-06 18:02:54.848930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with 
pdu=0x200016eff3c8 00:26:07.045 [2024-12-06 18:02:54.848969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.045 [2024-12-06 18:02:54.848987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.045 [2024-12-06 18:02:54.853827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.045 [2024-12-06 18:02:54.853872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.045 [2024-12-06 18:02:54.853888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.045 [2024-12-06 18:02:54.857404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.045 [2024-12-06 18:02:54.857453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.045 [2024-12-06 18:02:54.857468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.045 [2024-12-06 18:02:54.860592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.045 [2024-12-06 18:02:54.860632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.045 [2024-12-06 18:02:54.860647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.045 [2024-12-06 18:02:54.867760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.045 [2024-12-06 18:02:54.867800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.045 [2024-12-06 18:02:54.867816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.045 [2024-12-06 18:02:54.870501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.870549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.870566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.873016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.873060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.873076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.875507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.875572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.875588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.878390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.878429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.878444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.883473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.883518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.883534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.889631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.889855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.889871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.895304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.895344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.895360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.899755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.899796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.899812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.903612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.903701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.903717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.907865] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.907916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.907931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.912263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.912304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.912320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.917699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.917738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.917754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.920229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.920275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.920291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.922727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.922769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.922785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.925200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.925238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.925254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.927695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.927741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.927757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.930193] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.930238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.930253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.932667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.932712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.932727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.935158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.935203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.935219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.937719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.937774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.937789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.940445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.940531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.940546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.948318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.305 [2024-12-06 18:02:54.948514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.305 [2024-12-06 18:02:54.948532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.305 [2024-12-06 18:02:54.957078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:54.957256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:54.957273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.306 
[2024-12-06 18:02:54.966506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:54.966686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:54.966702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:54.975776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:54.975983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:54.975999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:54.983485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:54.983599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:54.983615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:54.991178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:54.991232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:54.991248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:54.993678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:54.993723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:54.993739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:54.996343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:54.996383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:54.996398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.000407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.000450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.000466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:07.306 [2024-12-06 18:02:55.006692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.006744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.006760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.012433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.012628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.012644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.016933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.017004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.017019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.021704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.021746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.021762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.027240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.027281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.027296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.031992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.032031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.032047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.036739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.036778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.036793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.042777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.042819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.042835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.045327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.045370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.045385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.047821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.047867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.047882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.050355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.050399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.050415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.052866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.052911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.052926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.055409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.055462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.055478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.058147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.058209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.058225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.062303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.062499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.062515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.071384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.071556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.071572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.079663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.079751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.079766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.085142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.085354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.085372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.090714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.090919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.090935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.098769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.098809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.098824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.103155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.103202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.103218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.105663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.105702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.105717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.108196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.108243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.108259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.110721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.306 [2024-12-06 18:02:55.110763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-12-06 18:02:55.110779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.306 [2024-12-06 18:02:55.113271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.307 [2024-12-06 18:02:55.113324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.307 [2024-12-06 18:02:55.113340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.307 [2024-12-06 18:02:55.115861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.307 [2024-12-06 18:02:55.115906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.307 [2024-12-06 18:02:55.115921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.307 [2024-12-06 18:02:55.118397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.307 [2024-12-06 18:02:55.118453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.307 [2024-12-06 18:02:55.118469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.307 [2024-12-06 18:02:55.120976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.307 [2024-12-06 18:02:55.121015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.307 [2024-12-06 18:02:55.121031] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.307 [2024-12-06 18:02:55.123715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.307 [2024-12-06 18:02:55.123760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.307 [2024-12-06 18:02:55.123776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.307 [2024-12-06 18:02:55.129774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.307 [2024-12-06 18:02:55.129824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.307 [2024-12-06 18:02:55.129840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.565 [2024-12-06 18:02:55.133072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.565 [2024-12-06 18:02:55.133126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.565 [2024-12-06 18:02:55.133143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.565 [2024-12-06 18:02:55.139137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.565 [2024-12-06 18:02:55.139317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.565 [2024-12-06 18:02:55.139332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.147064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.147272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.147288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.152169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.152208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.152223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.158222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.158260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.158276] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.162873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.162917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.162933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.165375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.165420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.165436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.167879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.167926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.167941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.170397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.170438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.170454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.174778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.174953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.174969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.182496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.182559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.182575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.190352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.190599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 
18:02:55.190615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.198208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.198431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.198447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.207545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.207699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.207718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.217361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.217571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.217588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.226191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.226384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.226400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.230499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.230545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.230561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.233002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.233048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.233064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.235507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.235553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:07.566 [2024-12-06 18:02:55.235569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.237990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.238034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.238050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.240519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.240570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.240586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.242996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.243063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.243079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.247915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.248115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.248131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.256180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.256378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.256394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:07.566 [2024-12-06 18:02:55.264735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb27eb0) with pdu=0x200016eff3c8 00:26:07.566 [2024-12-06 18:02:55.264915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.566 [2024-12-06 18:02:55.264931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:07.566 4931.00 IOPS, 616.38 MiB/s 00:26:07.566 Latency(us) 00:26:07.566 [2024-12-06T17:02:55.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.566 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:07.566 nvme0n1 : 2.01 4925.92 615.74 
0.00 0.00 3241.61 1099.09 12561.07 00:26:07.566 [2024-12-06T17:02:55.393Z] =================================================================================================================== 00:26:07.566 [2024-12-06T17:02:55.393Z] Total : 4925.92 615.74 0.00 0.00 3241.61 1099.09 12561.07 00:26:07.566 { 00:26:07.566 "results": [ 00:26:07.566 { 00:26:07.566 "job": "nvme0n1", 00:26:07.566 "core_mask": "0x2", 00:26:07.566 "workload": "randwrite", 00:26:07.566 "status": "finished", 00:26:07.566 "queue_depth": 16, 00:26:07.566 "io_size": 131072, 00:26:07.566 "runtime": 2.006121, 00:26:07.566 "iops": 4925.924208958482, 00:26:07.566 "mibps": 615.7405261198103, 00:26:07.566 "io_failed": 0, 00:26:07.566 "io_timeout": 0, 00:26:07.566 "avg_latency_us": 3241.605915131889, 00:26:07.566 "min_latency_us": 1099.0933333333332, 00:26:07.566 "max_latency_us": 12561.066666666668 00:26:07.566 } 00:26:07.566 ], 00:26:07.566 "core_count": 1 00:26:07.566 } 00:26:07.566 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:07.566 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:07.566 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:07.566 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:07.566 | .driver_specific 00:26:07.566 | .nvme_error 00:26:07.566 | .status_code 00:26:07.566 | .command_transient_transport_error' 00:26:07.825 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 319 > 0 )) 00:26:07.825 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3201867 00:26:07.825 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3201867 ']' 00:26:07.825 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3201867 00:26:07.825 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:07.825 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:07.825 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3201867 00:26:07.825 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:07.825 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:07.825 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3201867' 00:26:07.825 killing process with pid 3201867 00:26:07.825 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3201867 00:26:07.825 Received shutdown signal, test time was about 2.000000 seconds 00:26:07.825 00:26:07.825 Latency(us) 00:26:07.825 [2024-12-06T17:02:55.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.825 [2024-12-06T17:02:55.652Z] =================================================================================================================== 00:26:07.825 [2024-12-06T17:02:55.652Z] Total : 
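The trace above is how the test derives its pass condition: it reads the bdev's iostat counters over the bperf RPC socket and pulls the transient-transport-error count out with jq, then asserts the count is positive ((( 319 > 0 ))). A minimal standalone sketch of the same query; the rpc.py path, socket path, bdev name, and jq filter are taken verbatim from the trace, while the variable names and messages are illustrative:

    #!/usr/bin/env bash
    # Sketch of get_transient_errcount from host/digest.sh: count completions
    # that failed with COMMAND TRANSIENT TRANSPORT ERROR for one bdev.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    bdev=nvme0n1

    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')

    # The digest-error test passes only if the injected data-digest errors
    # actually surfaced as transient transport errors on completions.
    if (( errcount > 0 )); then
        echo "transient transport errors: $errcount"
    else
        echo "no transient transport errors recorded" >&2
        exit 1
    fi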
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:07.826 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3201867 00:26:07.826 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3199668 00:26:07.826 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3199668 ']' 00:26:07.826 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3199668 00:26:07.826 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:07.826 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:07.826 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3199668 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3199668' 00:26:08.086 killing process with pid 3199668 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3199668 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3199668 00:26:08.086 00:26:08.086 real 0m12.607s 00:26:08.086 user 0m24.965s 00:26:08.086 sys 0m2.882s 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:08.086 ************************************ 00:26:08.086 END TEST nvmf_digest_error 00:26:08.086 ************************************ 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:08.086 rmmod nvme_tcp 00:26:08.086 rmmod nvme_fabrics 00:26:08.086 rmmod nvme_keyring 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3199668 ']' 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3199668 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3199668 ']' 
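For reference, the transient-error check the digest test performed above (host/digest.sh lines 27-28, feeding the `(( 319 > 0 ))` test) reduces to one RPC over the bperf socket plus a jq filter over the bdev's iostat. A minimal standalone sketch of that query, reusing the socket path, bdev name, and jq path exactly as they appear in this run; error handling and the surrounding harness are omitted, so treat it as an illustration rather than the harness's code:

  #!/usr/bin/env bash
  # Count NVMe completions flagged "command transient transport error" on nvme0n1,
  # as recorded by the bdev layer. The jq path mirrors the filter used above.
  errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 )) && echo "observed ${errcount} transient transport errors"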
00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3199668 00:26:08.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3199668) - No such process 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3199668 is not found' 00:26:08.086 Process with pid 3199668 is not found 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:08.086 18:02:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.621 18:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:10.621 00:26:10.621 real 0m34.691s 00:26:10.621 user 0m54.151s 00:26:10.621 sys 0m10.124s 00:26:10.621 18:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:10.621 18:02:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:10.621 ************************************ 00:26:10.621 END TEST nvmf_digest 00:26:10.621 ************************************ 00:26:10.621 18:02:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:10.621 18:02:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:10.621 18:02:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:10.621 18:02:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:10.621 18:02:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:10.621 18:02:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:10.621 18:02:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.621 ************************************ 00:26:10.621 START TEST nvmf_bdevperf 00:26:10.621 ************************************ 00:26:10.621 18:02:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:10.621 * Looking for test storage... 
00:26:10.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:10.621 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:10.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.622 --rc genhtml_branch_coverage=1 00:26:10.622 --rc genhtml_function_coverage=1 00:26:10.622 --rc genhtml_legend=1 00:26:10.622 --rc geninfo_all_blocks=1 00:26:10.622 --rc geninfo_unexecuted_blocks=1 00:26:10.622 00:26:10.622 ' 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:10.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.622 --rc genhtml_branch_coverage=1 00:26:10.622 --rc genhtml_function_coverage=1 00:26:10.622 --rc genhtml_legend=1 00:26:10.622 --rc geninfo_all_blocks=1 00:26:10.622 --rc geninfo_unexecuted_blocks=1 00:26:10.622 00:26:10.622 ' 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:10.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.622 --rc genhtml_branch_coverage=1 00:26:10.622 --rc genhtml_function_coverage=1 00:26:10.622 --rc genhtml_legend=1 00:26:10.622 --rc geninfo_all_blocks=1 00:26:10.622 --rc geninfo_unexecuted_blocks=1 00:26:10.622 00:26:10.622 ' 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:10.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.622 --rc genhtml_branch_coverage=1 00:26:10.622 --rc genhtml_function_coverage=1 00:26:10.622 --rc genhtml_legend=1 00:26:10.622 --rc geninfo_all_blocks=1 00:26:10.622 --rc geninfo_unexecuted_blocks=1 00:26:10.622 00:26:10.622 ' 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:10.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:10.622 18:02:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.894 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:15.894 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:15.894 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:15.894 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:15.894 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:15.894 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:15.894 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:15.894 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:15.894 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:15.894 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:15.894 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:15.894 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:15.894 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:15.894 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:15.894 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:15.894 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:15.895 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:15.895 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:15.895 Found net devices under 0000:31:00.0: cvl_0_0 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:15.895 Found net devices under 0000:31:00.1: cvl_0_1 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:15.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:15.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:26:15.895 00:26:15.895 --- 10.0.0.2 ping statistics --- 00:26:15.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.895 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:15.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:15.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:26:15.895 00:26:15.895 --- 10.0.0.1 ping statistics --- 00:26:15.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.895 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:15.895 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3206982 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3206982 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3206982 ']' 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:15.896 [2024-12-06 18:03:03.437701] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
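The nvmf_tcp_init sequence above is plain iproute2 namespace plumbing: one e810 port is moved into a private namespace to act as the target while the other stays in the root namespace as the initiator. A condensed sketch of the same steps, with the interface and namespace names taken from this run (cvl_0_0/cvl_0_1 are host-specific; on another box the names will differ):

  # Target port lives in its own namespace; initiator port stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow the NVMe/TCP listener port through the initiator-side firewall.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target reachability, as verified above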
00:26:15.896 [2024-12-06 18:03:03.437752] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.896 [2024-12-06 18:03:03.509607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:15.896 [2024-12-06 18:03:03.539623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:15.896 [2024-12-06 18:03:03.539652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:15.896 [2024-12-06 18:03:03.539659] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:15.896 [2024-12-06 18:03:03.539665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:15.896 [2024-12-06 18:03:03.539671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:15.896 [2024-12-06 18:03:03.540875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:15.896 [2024-12-06 18:03:03.540991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.896 [2024-12-06 18:03:03.540994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.896 [2024-12-06 18:03:03.644488] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.896 Malloc0 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.896 [2024-12-06 18:03:03.702887] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:15.896 { 00:26:15.896 "params": { 00:26:15.896 "name": "Nvme$subsystem", 00:26:15.896 "trtype": "$TEST_TRANSPORT", 00:26:15.896 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.896 "adrfam": "ipv4", 00:26:15.896 "trsvcid": "$NVMF_PORT", 00:26:15.896 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.896 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.896 "hdgst": ${hdgst:-false}, 00:26:15.896 "ddgst": ${ddgst:-false} 00:26:15.896 }, 00:26:15.896 "method": "bdev_nvme_attach_controller" 00:26:15.896 } 00:26:15.896 EOF 00:26:15.896 )") 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:15.896 18:03:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:15.896 "params": { 00:26:15.896 "name": "Nvme1", 00:26:15.896 "trtype": "tcp", 00:26:15.896 "traddr": "10.0.0.2", 00:26:15.896 "adrfam": "ipv4", 00:26:15.896 "trsvcid": "4420", 00:26:15.896 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:15.896 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:15.896 "hdgst": false, 00:26:15.896 "ddgst": false 00:26:15.896 }, 00:26:15.896 "method": "bdev_nvme_attach_controller" 00:26:15.896 }' 00:26:16.155 [2024-12-06 18:03:03.740619] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
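The gen_nvmf_target_json heredoc above expands to the single bdev_nvme_attach_controller entry printed in the trace, which bdevperf then reads from /dev/fd/62. A hedged sketch of an equivalent direct invocation, using the flags from this run; the subsystems envelope shown is the standard SPDK JSON-config wrapper and is an assumption here, since the trace prints only the inner config entry:

  # Hypothetical standalone run; the harness pipes the same entry via /dev/fd/62.
  ./build/examples/bdevperf -q 128 -o 4096 -w verify -t 1 --json <(cat <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  )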
00:26:16.155 [2024-12-06 18:03:03.740667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207174 ] 00:26:16.155 [2024-12-06 18:03:03.819148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.155 [2024-12-06 18:03:03.855486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.418 Running I/O for 1 seconds... 00:26:17.378 11185.00 IOPS, 43.69 MiB/s 00:26:17.378 Latency(us) 00:26:17.378 [2024-12-06T17:03:05.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.378 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:17.378 Verification LBA range: start 0x0 length 0x4000 00:26:17.378 Nvme1n1 : 1.01 11266.34 44.01 0.00 0.00 11296.90 1815.89 12124.16 00:26:17.378 [2024-12-06T17:03:05.205Z] =================================================================================================================== 00:26:17.378 [2024-12-06T17:03:05.205Z] Total : 11266.34 44.01 0.00 0.00 11296.90 1815.89 12124.16 00:26:17.636 18:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3207531 00:26:17.636 18:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:17.636 18:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:17.636 18:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:17.636 18:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:17.636 18:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:17.636 18:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:17.636 18:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:17.636 { 00:26:17.636 "params": { 00:26:17.636 "name": "Nvme$subsystem", 00:26:17.636 "trtype": "$TEST_TRANSPORT", 00:26:17.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.636 "adrfam": "ipv4", 00:26:17.636 "trsvcid": "$NVMF_PORT", 00:26:17.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.636 "hdgst": ${hdgst:-false}, 00:26:17.636 "ddgst": ${ddgst:-false} 00:26:17.636 }, 00:26:17.636 "method": "bdev_nvme_attach_controller" 00:26:17.636 } 00:26:17.636 EOF 00:26:17.636 )") 00:26:17.636 18:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:17.636 18:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:26:17.636 18:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:17.636 18:03:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:17.636 "params": { 00:26:17.636 "name": "Nvme1", 00:26:17.636 "trtype": "tcp", 00:26:17.636 "traddr": "10.0.0.2", 00:26:17.636 "adrfam": "ipv4", 00:26:17.636 "trsvcid": "4420", 00:26:17.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:17.636 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:17.636 "hdgst": false, 00:26:17.636 "ddgst": false 00:26:17.636 }, 00:26:17.636 "method": "bdev_nvme_attach_controller" 00:26:17.636 }' 00:26:17.636 [2024-12-06 18:03:05.248817] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:26:17.636 [2024-12-06 18:03:05.248870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207531 ] 00:26:17.637 [2024-12-06 18:03:05.326536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.637 [2024-12-06 18:03:05.361703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.894 Running I/O for 15 seconds... 00:26:20.206 11222.00 IOPS, 43.84 MiB/s [2024-12-06T17:03:08.294Z] 11476.00 IOPS, 44.83 MiB/s [2024-12-06T17:03:08.294Z] 18:03:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3206982 00:26:20.467 18:03:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:20.467 [2024-12-06 18:03:08.229894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.467 [2024-12-06 18:03:08.229930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.467 [2024-12-06 18:03:08.229945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.467 [2024-12-06 18:03:08.229952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.467 [2024-12-06 18:03:08.229962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.467 [2024-12-06 18:03:08.229969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.467 [2024-12-06 18:03:08.229977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.467 [2024-12-06 18:03:08.229982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.467 [2024-12-06 18:03:08.229991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.467 [2024-12-06 18:03:08.230000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.467 [2024-12-06 18:03:08.230008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.467 [2024-12-06 
18:03:08.230014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.467 [2024-12-06 18:03:08.230023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.467 [2024-12-06 18:03:08.230030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.467 [2024-12-06 18:03:08.230037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.467 [2024-12-06 18:03:08.230044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.467 [2024-12-06 18:03:08.230052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:20.467 [2024-12-06 18:03:08.230058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.467 [2024-12-06 18:03:08.230064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.467 [2024-12-06 18:03:08.230071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.467 [2024-12-06 18:03:08.230078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.467 [2024-12-06 18:03:08.230083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.467 [2024-12-06 18:03:08.230095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.467 [2024-12-06 18:03:08.230199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.467 [2024-12-06 18:03:08.230209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.467 [2024-12-06 18:03:08.230217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.467 [2024-12-06 18:03:08.230226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.467 [2024-12-06 18:03:08.230235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.467 [2024-12-06 18:03:08.230243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.467 [2024-12-06 18:03:08.230256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.467 [2024-12-06 18:03:08.230264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.467 [2024-12-06 18:03:08.230271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.467-00:26:20.470 [2024-12-06 18:03:08.230280-18:03:08.231666] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [~110 further command/completion pairs on sqid:1 — READ lba:109688-110512 and WRITE lba:110536-110584, len:8 each — every one completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:26:20.470 [2024-12-06 18:03:08.231673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1116660 is same with the state(6) to be set
00:26:20.470 [2024-12-06 18:03:08.231680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:20.470 [2024-12-06 18:03:08.231685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:20.470 [2024-12-06 18:03:08.231690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110520 len:8 PRP1 0x0 PRP2 0x0
00:26:20.470 [2024-12-06 18:03:08.231697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:20.470 [2024-12-06 18:03:08.234233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:20.470 [2024-12-06 18:03:08.234276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:20.470 [2024-12-06 18:03:08.234914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.470 [2024-12-06 18:03:08.234927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:20.470 [2024-12-06 18:03:08.234934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:20.470 [2024-12-06 18:03:08.235085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:20.470 [2024-12-06 18:03:08.235241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:20.470 [2024-12-06 18:03:08.235248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:20.470 [2024-12-06 18:03:08.235256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:20.470 [2024-12-06 18:03:08.235263] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
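In the completions above, "(00/08)" is SPDK's (status code type / status code) notation: SCT 0x0 (generic command status) with SC 0x08, ABORTED - SQ DELETION. The commands were not rejected by the media; their submission queue was deleted when the qpair was torn down for the controller reset. A minimal sketch (hypothetical, not part of this test's code; it assumes only the public definitions from spdk/nvme.h) of an I/O completion callback that decodes this status and treats such commands as retryable:

#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical completion callback: decode the (sct/sc) pair that
 * spdk_nvme_print_completion renders as "(00/08)" in the log above. */
static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* Not a media error: the submission queue went away during
		 * the reset, so the command is safe to resubmit on a new
		 * qpair once the controller comes back. */
		printf("cid %u aborted by SQ deletion; retry after reset\n", cpl->cid);
		return;
	}
	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("cid %u failed: sct=0x%x sc=0x%x dnr=%u\n", cpl->cid,
		       cpl->status.sct, cpl->status.sc, cpl->status.dnr);
	}
}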
00:26:20.470 [2024-12-06 18:03:08.247075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:20.470 [2024-12-06 18:03:08.247577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.470 [2024-12-06 18:03:08.247592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:20.470 [2024-12-06 18:03:08.247598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:20.470 [2024-12-06 18:03:08.247749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:20.470 [2024-12-06 18:03:08.247900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:20.470 [2024-12-06 18:03:08.247906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:20.470 [2024-12-06 18:03:08.247912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:20.470 [2024-12-06 18:03:08.247922] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:20.470-00:26:20.732 [2024-12-06 18:03:08.259745-18:03:08.513918] [the same "resetting controller" -> "connect() failed, errno = 111" -> "Resetting controller failed." sequence repeats ~21 more times at roughly 12.7 ms intervals, all against tqpair=0x10ee780 with addr=10.0.0.2, port=4420]
00:26:20.732 [2024-12-06 18:03:08.525590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.732 [2024-12-06 18:03:08.526175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.732 [2024-12-06 18:03:08.526208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.732 [2024-12-06 18:03:08.526217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.732 [2024-12-06 18:03:08.526385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.732 [2024-12-06 18:03:08.526539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.732 [2024-12-06 18:03:08.526546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.732 [2024-12-06 18:03:08.526552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.732 [2024-12-06 18:03:08.526558] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.732 [2024-12-06 18:03:08.538235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.732 [2024-12-06 18:03:08.538771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.732 [2024-12-06 18:03:08.538802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.732 [2024-12-06 18:03:08.538813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.732 [2024-12-06 18:03:08.538980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.732 [2024-12-06 18:03:08.539140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.732 [2024-12-06 18:03:08.539151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.732 [2024-12-06 18:03:08.539157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.732 [2024-12-06 18:03:08.539165] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.732 [2024-12-06 18:03:08.550832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.732 [2024-12-06 18:03:08.551348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.732 [2024-12-06 18:03:08.551364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.732 [2024-12-06 18:03:08.551371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.732 [2024-12-06 18:03:08.551521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.732 [2024-12-06 18:03:08.551672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.732 [2024-12-06 18:03:08.551679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.732 [2024-12-06 18:03:08.551684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.732 [2024-12-06 18:03:08.551689] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.992 [2024-12-06 18:03:08.563510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.992 [2024-12-06 18:03:08.563958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.992 [2024-12-06 18:03:08.563973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.992 [2024-12-06 18:03:08.563979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.992 [2024-12-06 18:03:08.564133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.992 [2024-12-06 18:03:08.564285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.992 [2024-12-06 18:03:08.564291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.992 [2024-12-06 18:03:08.564297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.992 [2024-12-06 18:03:08.564303] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.992 [2024-12-06 18:03:08.576105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.992 [2024-12-06 18:03:08.576590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.992 [2024-12-06 18:03:08.576604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.992 [2024-12-06 18:03:08.576610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.992 [2024-12-06 18:03:08.576760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.992 [2024-12-06 18:03:08.576910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.992 [2024-12-06 18:03:08.576917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.992 [2024-12-06 18:03:08.576923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.992 [2024-12-06 18:03:08.576932] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.992 [2024-12-06 18:03:08.588746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.992 [2024-12-06 18:03:08.589233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.992 [2024-12-06 18:03:08.589265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.992 [2024-12-06 18:03:08.589274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.992 [2024-12-06 18:03:08.589442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.992 [2024-12-06 18:03:08.589596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.992 [2024-12-06 18:03:08.589603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.992 [2024-12-06 18:03:08.589610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.992 [2024-12-06 18:03:08.589616] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.992 [2024-12-06 18:03:08.601428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.992 [2024-12-06 18:03:08.601929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.992 [2024-12-06 18:03:08.601960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.992 [2024-12-06 18:03:08.601970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.992 [2024-12-06 18:03:08.602144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.992 [2024-12-06 18:03:08.602298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.992 [2024-12-06 18:03:08.602305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.992 [2024-12-06 18:03:08.602312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.992 [2024-12-06 18:03:08.602318] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.992 [2024-12-06 18:03:08.614128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.992 [2024-12-06 18:03:08.614724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.992 [2024-12-06 18:03:08.614756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.992 [2024-12-06 18:03:08.614765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.992 [2024-12-06 18:03:08.614931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.992 [2024-12-06 18:03:08.615084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.992 [2024-12-06 18:03:08.615091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.992 [2024-12-06 18:03:08.615097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.992 [2024-12-06 18:03:08.615108] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.992 [2024-12-06 18:03:08.626776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.992 [2024-12-06 18:03:08.627419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.992 [2024-12-06 18:03:08.627451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.992 [2024-12-06 18:03:08.627461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.992 [2024-12-06 18:03:08.627626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.992 [2024-12-06 18:03:08.627780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.992 [2024-12-06 18:03:08.627787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.992 [2024-12-06 18:03:08.627794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.992 [2024-12-06 18:03:08.627800] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.992 [2024-12-06 18:03:08.639476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.992 [2024-12-06 18:03:08.640041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.992 [2024-12-06 18:03:08.640073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.992 [2024-12-06 18:03:08.640082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.992 [2024-12-06 18:03:08.640256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.992 [2024-12-06 18:03:08.640411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.992 [2024-12-06 18:03:08.640418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.992 [2024-12-06 18:03:08.640424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.992 [2024-12-06 18:03:08.640430] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.992 [2024-12-06 18:03:08.652096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.992 [2024-12-06 18:03:08.652717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.992 [2024-12-06 18:03:08.652749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.992 [2024-12-06 18:03:08.652758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.992 [2024-12-06 18:03:08.652924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.992 [2024-12-06 18:03:08.653077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.992 [2024-12-06 18:03:08.653084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.992 [2024-12-06 18:03:08.653090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.992 [2024-12-06 18:03:08.653097] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.992 [2024-12-06 18:03:08.664784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.992 [2024-12-06 18:03:08.665440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.992 [2024-12-06 18:03:08.665472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.992 [2024-12-06 18:03:08.665482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.992 [2024-12-06 18:03:08.665651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.992 [2024-12-06 18:03:08.665805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.992 [2024-12-06 18:03:08.665813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.992 [2024-12-06 18:03:08.665819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.993 [2024-12-06 18:03:08.665825] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.993 10026.67 IOPS, 39.17 MiB/s [2024-12-06T17:03:08.820Z] [2024-12-06 18:03:08.678689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.993 [2024-12-06 18:03:08.679202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.993 [2024-12-06 18:03:08.679233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.993 [2024-12-06 18:03:08.679243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.993 [2024-12-06 18:03:08.679410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.993 [2024-12-06 18:03:08.679564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.993 [2024-12-06 18:03:08.679571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.993 [2024-12-06 18:03:08.679577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.993 [2024-12-06 18:03:08.679583] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.993 [2024-12-06 18:03:08.691404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.993 [2024-12-06 18:03:08.691981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.993 [2024-12-06 18:03:08.692014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.993 [2024-12-06 18:03:08.692023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.993 [2024-12-06 18:03:08.692196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.993 [2024-12-06 18:03:08.692350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.993 [2024-12-06 18:03:08.692357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.993 [2024-12-06 18:03:08.692363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.993 [2024-12-06 18:03:08.692369] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
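Note: the "10026.67 IOPS, 39.17 MiB/s [2024-12-06T17:03:08.820Z]" fragment at the start of this stretch is not part of the error cycle; it is a periodic throughput sample from the test's I/O generator, stamped with its own UTC timestamp and interleaved with the reconnect log. The two figures are mutually consistent with a 4 KiB I/O size (the size itself is not stated in this excerpt, only implied by the ratio):

    10026.67 IOPS x 4096 B = 41,069,240 B/s;  41,069,240 / 1,048,576 = 39.17 MiB/s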
00:26:20.993 [2024-12-06 18:03:08.704032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.993 [2024-12-06 18:03:08.704639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.993 [2024-12-06 18:03:08.704671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.993 [2024-12-06 18:03:08.704680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.993 [2024-12-06 18:03:08.704846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.993 [2024-12-06 18:03:08.705000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.993 [2024-12-06 18:03:08.705011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.993 [2024-12-06 18:03:08.705017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.993 [2024-12-06 18:03:08.705023] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.993 [2024-12-06 18:03:08.716694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.993 [2024-12-06 18:03:08.717217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.993 [2024-12-06 18:03:08.717249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.993 [2024-12-06 18:03:08.717258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.993 [2024-12-06 18:03:08.717424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.993 [2024-12-06 18:03:08.717578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.993 [2024-12-06 18:03:08.717585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.993 [2024-12-06 18:03:08.717591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.993 [2024-12-06 18:03:08.717597] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.993 [2024-12-06 18:03:08.729413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.993 [2024-12-06 18:03:08.730017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.993 [2024-12-06 18:03:08.730049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.993 [2024-12-06 18:03:08.730058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.993 [2024-12-06 18:03:08.730233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.993 [2024-12-06 18:03:08.730388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.993 [2024-12-06 18:03:08.730395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.993 [2024-12-06 18:03:08.730402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.993 [2024-12-06 18:03:08.730408] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.993 [2024-12-06 18:03:08.742076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.993 [2024-12-06 18:03:08.742561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.993 [2024-12-06 18:03:08.742592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.993 [2024-12-06 18:03:08.742602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.993 [2024-12-06 18:03:08.742768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.993 [2024-12-06 18:03:08.742922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.993 [2024-12-06 18:03:08.742930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.993 [2024-12-06 18:03:08.742937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.993 [2024-12-06 18:03:08.742947] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.993 [2024-12-06 18:03:08.754769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.993 [2024-12-06 18:03:08.755368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.993 [2024-12-06 18:03:08.755400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.993 [2024-12-06 18:03:08.755409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.993 [2024-12-06 18:03:08.755575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.993 [2024-12-06 18:03:08.755728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.993 [2024-12-06 18:03:08.755736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.993 [2024-12-06 18:03:08.755742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.993 [2024-12-06 18:03:08.755748] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.993 [2024-12-06 18:03:08.767419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.993 [2024-12-06 18:03:08.768021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.993 [2024-12-06 18:03:08.768053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.993 [2024-12-06 18:03:08.768062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.993 [2024-12-06 18:03:08.768235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.993 [2024-12-06 18:03:08.768389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.993 [2024-12-06 18:03:08.768396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.993 [2024-12-06 18:03:08.768402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.993 [2024-12-06 18:03:08.768408] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.993 [2024-12-06 18:03:08.780073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.993 [2024-12-06 18:03:08.780672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.993 [2024-12-06 18:03:08.780704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.993 [2024-12-06 18:03:08.780713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.993 [2024-12-06 18:03:08.780878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.993 [2024-12-06 18:03:08.781040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.993 [2024-12-06 18:03:08.781048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.993 [2024-12-06 18:03:08.781054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.993 [2024-12-06 18:03:08.781059] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:20.993 [2024-12-06 18:03:08.792730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.993 [2024-12-06 18:03:08.793231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.993 [2024-12-06 18:03:08.793263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.993 [2024-12-06 18:03:08.793272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.993 [2024-12-06 18:03:08.793440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.993 [2024-12-06 18:03:08.793594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.994 [2024-12-06 18:03:08.793601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.994 [2024-12-06 18:03:08.793607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.994 [2024-12-06 18:03:08.793612] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:20.994 [2024-12-06 18:03:08.805430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:20.994 [2024-12-06 18:03:08.806028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.994 [2024-12-06 18:03:08.806060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:20.994 [2024-12-06 18:03:08.806069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:20.994 [2024-12-06 18:03:08.806242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:20.994 [2024-12-06 18:03:08.806396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:20.994 [2024-12-06 18:03:08.806404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:20.994 [2024-12-06 18:03:08.806410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:20.994 [2024-12-06 18:03:08.806416] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.255 [2024-12-06 18:03:08.818080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.255 [2024-12-06 18:03:08.818685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.255 [2024-12-06 18:03:08.818717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:21.255 [2024-12-06 18:03:08.818726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:21.255 [2024-12-06 18:03:08.818892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:21.255 [2024-12-06 18:03:08.819045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.255 [2024-12-06 18:03:08.819053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.255 [2024-12-06 18:03:08.819059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.255 [2024-12-06 18:03:08.819065] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.255 [2024-12-06 18:03:08.830740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.255 [2024-12-06 18:03:08.831234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.255 [2024-12-06 18:03:08.831265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:21.255 [2024-12-06 18:03:08.831279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:21.255 [2024-12-06 18:03:08.831446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:21.255 [2024-12-06 18:03:08.831600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.255 [2024-12-06 18:03:08.831607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.255 [2024-12-06 18:03:08.831613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.255 [2024-12-06 18:03:08.831619] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.255 [2024-12-06 18:03:08.843433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.255 [2024-12-06 18:03:08.843930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.255 [2024-12-06 18:03:08.843946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:21.255 [2024-12-06 18:03:08.843952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:21.255 [2024-12-06 18:03:08.844108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:21.255 [2024-12-06 18:03:08.844260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.255 [2024-12-06 18:03:08.844267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.255 [2024-12-06 18:03:08.844273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.255 [2024-12-06 18:03:08.844278] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.255 [2024-12-06 18:03:08.856090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.255 [2024-12-06 18:03:08.856637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.255 [2024-12-06 18:03:08.856669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:21.255 [2024-12-06 18:03:08.856678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:21.255 [2024-12-06 18:03:08.856844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:21.255 [2024-12-06 18:03:08.856998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.255 [2024-12-06 18:03:08.857005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.255 [2024-12-06 18:03:08.857011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.255 [2024-12-06 18:03:08.857017] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.255 [2024-12-06 18:03:08.868691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.255 [2024-12-06 18:03:08.869319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.255 [2024-12-06 18:03:08.869356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:21.255 [2024-12-06 18:03:08.869365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:21.255 [2024-12-06 18:03:08.869531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:21.255 [2024-12-06 18:03:08.869684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.255 [2024-12-06 18:03:08.869695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.255 [2024-12-06 18:03:08.869701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.255 [2024-12-06 18:03:08.869707] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.255 [2024-12-06 18:03:08.881388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.255 [2024-12-06 18:03:08.881974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.255 [2024-12-06 18:03:08.882006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:21.255 [2024-12-06 18:03:08.882015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:21.255 [2024-12-06 18:03:08.882188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:21.255 [2024-12-06 18:03:08.882342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.255 [2024-12-06 18:03:08.882349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.255 [2024-12-06 18:03:08.882355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.255 [2024-12-06 18:03:08.882361] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.255 [2024-12-06 18:03:08.894023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.255 [2024-12-06 18:03:08.894595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.255 [2024-12-06 18:03:08.894627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:21.255 [2024-12-06 18:03:08.894636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:21.255 [2024-12-06 18:03:08.894801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:21.255 [2024-12-06 18:03:08.894954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.256 [2024-12-06 18:03:08.894962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.256 [2024-12-06 18:03:08.894968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.256 [2024-12-06 18:03:08.894974] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.256 [2024-12-06 18:03:08.906648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.256 [2024-12-06 18:03:08.907130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.256 [2024-12-06 18:03:08.907162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:21.256 [2024-12-06 18:03:08.907171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:21.256 [2024-12-06 18:03:08.907338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:21.256 [2024-12-06 18:03:08.907492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.256 [2024-12-06 18:03:08.907499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.256 [2024-12-06 18:03:08.907505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.256 [2024-12-06 18:03:08.907518] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.256 [2024-12-06 18:03:08.919336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.256 [2024-12-06 18:03:08.919937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.256 [2024-12-06 18:03:08.919969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:21.256 [2024-12-06 18:03:08.919978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:21.256 [2024-12-06 18:03:08.920150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:21.256 [2024-12-06 18:03:08.920304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.256 [2024-12-06 18:03:08.920311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.256 [2024-12-06 18:03:08.920317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.256 [2024-12-06 18:03:08.920323] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.256 [2024-12-06 18:03:08.931990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.256 [2024-12-06 18:03:08.932585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.256 [2024-12-06 18:03:08.932618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:21.256 [2024-12-06 18:03:08.932627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:21.256 [2024-12-06 18:03:08.932792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:21.256 [2024-12-06 18:03:08.932946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.256 [2024-12-06 18:03:08.932953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.256 [2024-12-06 18:03:08.932959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.256 [2024-12-06 18:03:08.932965] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.256 [2024-12-06 18:03:08.944637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.256 [2024-12-06 18:03:08.945238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.256 [2024-12-06 18:03:08.945269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:21.256 [2024-12-06 18:03:08.945279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:21.256 [2024-12-06 18:03:08.945446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:21.256 [2024-12-06 18:03:08.945599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.256 [2024-12-06 18:03:08.945607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.256 [2024-12-06 18:03:08.945613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.256 [2024-12-06 18:03:08.945619] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.256 [2024-12-06 18:03:08.957299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.256 [2024-12-06 18:03:08.957743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.256 [2024-12-06 18:03:08.957758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:21.256 [2024-12-06 18:03:08.957764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:21.256 [2024-12-06 18:03:08.957915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:21.256 [2024-12-06 18:03:08.958066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.256 [2024-12-06 18:03:08.958073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.256 [2024-12-06 18:03:08.958078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.256 [2024-12-06 18:03:08.958083] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.256 [2024-12-06 18:03:08.969916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.256 [2024-12-06 18:03:08.970207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.256 [2024-12-06 18:03:08.970222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:21.256 [2024-12-06 18:03:08.970228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:21.256 [2024-12-06 18:03:08.970377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:21.256 [2024-12-06 18:03:08.970527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.256 [2024-12-06 18:03:08.970534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.256 [2024-12-06 18:03:08.970540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.256 [2024-12-06 18:03:08.970546] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:21.256 - 00:26:21.781 [2024-12-06 18:03:08.982547 - 18:03:09.577765] (the identical reconnect cycle for [nqn.2016-06.io.spdk:cnode1, 2], tqpair=0x10ee780, repeats 48 more times at roughly 12.5 ms intervals: connect() failed, errno = 111; controller reinitialization failed; Resetting controller failed.)
00:26:21.781 [2024-12-06 18:03:09.589618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.781 [2024-12-06 18:03:09.590079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.781 [2024-12-06 18:03:09.590093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:21.781 [2024-12-06 18:03:09.590099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:21.781 [2024-12-06 18:03:09.590258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:21.781 [2024-12-06 18:03:09.590409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.781 [2024-12-06 18:03:09.590416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.781 [2024-12-06 18:03:09.590421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.781 [2024-12-06 18:03:09.590427] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.781 [2024-12-06 18:03:09.602308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.781 [2024-12-06 18:03:09.602764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.781 [2024-12-06 18:03:09.602778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:21.781 [2024-12-06 18:03:09.602783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:21.781 [2024-12-06 18:03:09.602933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:21.781 [2024-12-06 18:03:09.603082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.781 [2024-12-06 18:03:09.603089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.781 [2024-12-06 18:03:09.603095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.781 [2024-12-06 18:03:09.603105] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.041 [2024-12-06 18:03:09.614975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.041 [2024-12-06 18:03:09.615593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.041 [2024-12-06 18:03:09.615607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.041 [2024-12-06 18:03:09.615613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.041 [2024-12-06 18:03:09.615763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.041 [2024-12-06 18:03:09.615913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.041 [2024-12-06 18:03:09.615920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.041 [2024-12-06 18:03:09.615925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.041 [2024-12-06 18:03:09.615930] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.041 [2024-12-06 18:03:09.627650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.041 [2024-12-06 18:03:09.628137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.041 [2024-12-06 18:03:09.628151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.041 [2024-12-06 18:03:09.628157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.041 [2024-12-06 18:03:09.628310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.041 [2024-12-06 18:03:09.628460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.041 [2024-12-06 18:03:09.628467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.041 [2024-12-06 18:03:09.628472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.041 [2024-12-06 18:03:09.628477] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.041 [2024-12-06 18:03:09.640331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.041 [2024-12-06 18:03:09.640686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.041 [2024-12-06 18:03:09.640700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.041 [2024-12-06 18:03:09.640706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.041 [2024-12-06 18:03:09.640855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.041 [2024-12-06 18:03:09.641005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.042 [2024-12-06 18:03:09.641013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.042 [2024-12-06 18:03:09.641018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.042 [2024-12-06 18:03:09.641023] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.042 [2024-12-06 18:03:09.652989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.042 [2024-12-06 18:03:09.653561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.042 [2024-12-06 18:03:09.653594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.042 [2024-12-06 18:03:09.653603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.042 [2024-12-06 18:03:09.653773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.042 [2024-12-06 18:03:09.653928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.042 [2024-12-06 18:03:09.653935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.042 [2024-12-06 18:03:09.653941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.042 [2024-12-06 18:03:09.653946] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.042 [2024-12-06 18:03:09.665638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.042 [2024-12-06 18:03:09.666110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.042 [2024-12-06 18:03:09.666128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.042 [2024-12-06 18:03:09.666135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.042 [2024-12-06 18:03:09.666285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.042 [2024-12-06 18:03:09.666436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.042 [2024-12-06 18:03:09.666443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.042 [2024-12-06 18:03:09.666449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.042 [2024-12-06 18:03:09.666454] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.042 [2024-12-06 18:03:09.678271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.042 [2024-12-06 18:03:09.678860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.042 [2024-12-06 18:03:09.678892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.042 [2024-12-06 18:03:09.678901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.042 [2024-12-06 18:03:09.679068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.042 7520.00 IOPS, 29.38 MiB/s [2024-12-06T17:03:09.869Z] [2024-12-06 18:03:09.680413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.042 [2024-12-06 18:03:09.680420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.042 [2024-12-06 18:03:09.680427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.042 [2024-12-06 18:03:09.680432] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.042 [2024-12-06 18:03:09.690992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.042 [2024-12-06 18:03:09.691510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.042 [2024-12-06 18:03:09.691527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.042 [2024-12-06 18:03:09.691534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.042 [2024-12-06 18:03:09.691684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.042 [2024-12-06 18:03:09.691840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.042 [2024-12-06 18:03:09.691846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.042 [2024-12-06 18:03:09.691852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.042 [2024-12-06 18:03:09.691857] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.042 [2024-12-06 18:03:09.703686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.042 [2024-12-06 18:03:09.704146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.042 [2024-12-06 18:03:09.704168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.042 [2024-12-06 18:03:09.704175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.042 [2024-12-06 18:03:09.704330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.042 [2024-12-06 18:03:09.704481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.042 [2024-12-06 18:03:09.704487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.042 [2024-12-06 18:03:09.704493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.042 [2024-12-06 18:03:09.704498] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.042 [2024-12-06 18:03:09.716361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.042 [2024-12-06 18:03:09.716814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.042 [2024-12-06 18:03:09.716829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.042 [2024-12-06 18:03:09.716835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.042 [2024-12-06 18:03:09.716985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.042 [2024-12-06 18:03:09.717142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.042 [2024-12-06 18:03:09.717149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.042 [2024-12-06 18:03:09.717155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.042 [2024-12-06 18:03:09.717159] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.042 [2024-12-06 18:03:09.728970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.042 [2024-12-06 18:03:09.729576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.042 [2024-12-06 18:03:09.729608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.042 [2024-12-06 18:03:09.729617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.042 [2024-12-06 18:03:09.729783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.042 [2024-12-06 18:03:09.729936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.042 [2024-12-06 18:03:09.729944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.042 [2024-12-06 18:03:09.729949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.042 [2024-12-06 18:03:09.729959] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.042 [2024-12-06 18:03:09.741641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.042 [2024-12-06 18:03:09.742240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.042 [2024-12-06 18:03:09.742273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.042 [2024-12-06 18:03:09.742282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.042 [2024-12-06 18:03:09.742450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.042 [2024-12-06 18:03:09.742604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.042 [2024-12-06 18:03:09.742611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.042 [2024-12-06 18:03:09.742617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.042 [2024-12-06 18:03:09.742623] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.042 [2024-12-06 18:03:09.754288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.042 [2024-12-06 18:03:09.754797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.042 [2024-12-06 18:03:09.754813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.042 [2024-12-06 18:03:09.754819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.042 [2024-12-06 18:03:09.754970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.042 [2024-12-06 18:03:09.755125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.042 [2024-12-06 18:03:09.755132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.042 [2024-12-06 18:03:09.755138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.042 [2024-12-06 18:03:09.755143] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.042 [2024-12-06 18:03:09.766960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.042 [2024-12-06 18:03:09.767442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.042 [2024-12-06 18:03:09.767457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.042 [2024-12-06 18:03:09.767463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.043 [2024-12-06 18:03:09.767613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.043 [2024-12-06 18:03:09.767763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.043 [2024-12-06 18:03:09.767770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.043 [2024-12-06 18:03:09.767775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.043 [2024-12-06 18:03:09.767780] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.043 [2024-12-06 18:03:09.779590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.043 [2024-12-06 18:03:09.780203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.043 [2024-12-06 18:03:09.780235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.043 [2024-12-06 18:03:09.780245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.043 [2024-12-06 18:03:09.780413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.043 [2024-12-06 18:03:09.780566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.043 [2024-12-06 18:03:09.780573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.043 [2024-12-06 18:03:09.780579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.043 [2024-12-06 18:03:09.780585] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.043 [2024-12-06 18:03:09.792262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.043 [2024-12-06 18:03:09.792613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.043 [2024-12-06 18:03:09.792630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.043 [2024-12-06 18:03:09.792636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.043 [2024-12-06 18:03:09.792786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.043 [2024-12-06 18:03:09.792937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.043 [2024-12-06 18:03:09.792944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.043 [2024-12-06 18:03:09.792949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.043 [2024-12-06 18:03:09.792954] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.043 [2024-12-06 18:03:09.804899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.043 [2024-12-06 18:03:09.805512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.043 [2024-12-06 18:03:09.805544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.043 [2024-12-06 18:03:09.805554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.043 [2024-12-06 18:03:09.805719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.043 [2024-12-06 18:03:09.805873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.043 [2024-12-06 18:03:09.805880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.043 [2024-12-06 18:03:09.805887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.043 [2024-12-06 18:03:09.805893] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.043 [2024-12-06 18:03:09.817560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.043 [2024-12-06 18:03:09.818154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.043 [2024-12-06 18:03:09.818186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.043 [2024-12-06 18:03:09.818199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.043 [2024-12-06 18:03:09.818368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.043 [2024-12-06 18:03:09.818521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.043 [2024-12-06 18:03:09.818528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.043 [2024-12-06 18:03:09.818533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.043 [2024-12-06 18:03:09.818539] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.043 [2024-12-06 18:03:09.830207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.043 [2024-12-06 18:03:09.830792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.043 [2024-12-06 18:03:09.830824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.043 [2024-12-06 18:03:09.830834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.043 [2024-12-06 18:03:09.830999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.043 [2024-12-06 18:03:09.831159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.043 [2024-12-06 18:03:09.831167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.043 [2024-12-06 18:03:09.831172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.043 [2024-12-06 18:03:09.831178] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.043 [2024-12-06 18:03:09.842838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.043 [2024-12-06 18:03:09.843334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.043 [2024-12-06 18:03:09.843351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.043 [2024-12-06 18:03:09.843358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.043 [2024-12-06 18:03:09.843508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.043 [2024-12-06 18:03:09.843659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.043 [2024-12-06 18:03:09.843666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.043 [2024-12-06 18:03:09.843671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.043 [2024-12-06 18:03:09.843677] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.043 [2024-12-06 18:03:09.855481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.043 [2024-12-06 18:03:09.855934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.043 [2024-12-06 18:03:09.855949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.043 [2024-12-06 18:03:09.855954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.043 [2024-12-06 18:03:09.856114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.043 [2024-12-06 18:03:09.856265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.043 [2024-12-06 18:03:09.856276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.043 [2024-12-06 18:03:09.856281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.043 [2024-12-06 18:03:09.856286] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.303 [2024-12-06 18:03:09.868089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.303 [2024-12-06 18:03:09.868606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.303 [2024-12-06 18:03:09.868638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.303 [2024-12-06 18:03:09.868648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.303 [2024-12-06 18:03:09.868814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.303 [2024-12-06 18:03:09.868968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.303 [2024-12-06 18:03:09.868975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.303 [2024-12-06 18:03:09.868981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.303 [2024-12-06 18:03:09.868987] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.303 [2024-12-06 18:03:09.880800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.303 [2024-12-06 18:03:09.881280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.303 [2024-12-06 18:03:09.881297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.303 [2024-12-06 18:03:09.881303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.303 [2024-12-06 18:03:09.881454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.303 [2024-12-06 18:03:09.881605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.303 [2024-12-06 18:03:09.881611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.303 [2024-12-06 18:03:09.881617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.303 [2024-12-06 18:03:09.881622] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.303 [2024-12-06 18:03:09.893444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.303 [2024-12-06 18:03:09.893934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.303 [2024-12-06 18:03:09.893949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.303 [2024-12-06 18:03:09.893955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.303 [2024-12-06 18:03:09.894113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.303 [2024-12-06 18:03:09.894265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.303 [2024-12-06 18:03:09.894272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.303 [2024-12-06 18:03:09.894278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.303 [2024-12-06 18:03:09.894286] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.303 [2024-12-06 18:03:09.906106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.303 [2024-12-06 18:03:09.906653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.303 [2024-12-06 18:03:09.906685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.303 [2024-12-06 18:03:09.906694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.303 [2024-12-06 18:03:09.906860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.303 [2024-12-06 18:03:09.907015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.303 [2024-12-06 18:03:09.907022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.303 [2024-12-06 18:03:09.907028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.303 [2024-12-06 18:03:09.907034] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.303 [2024-12-06 18:03:09.918720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.303 [2024-12-06 18:03:09.919309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.303 [2024-12-06 18:03:09.919340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.303 [2024-12-06 18:03:09.919350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.303 [2024-12-06 18:03:09.919515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.303 [2024-12-06 18:03:09.919669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.303 [2024-12-06 18:03:09.919676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.303 [2024-12-06 18:03:09.919682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.303 [2024-12-06 18:03:09.919688] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.303 [2024-12-06 18:03:09.931373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.303 [2024-12-06 18:03:09.932005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.303 [2024-12-06 18:03:09.932037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.303 [2024-12-06 18:03:09.932047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.303 [2024-12-06 18:03:09.932221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.303 [2024-12-06 18:03:09.932375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.303 [2024-12-06 18:03:09.932382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.303 [2024-12-06 18:03:09.932389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.303 [2024-12-06 18:03:09.932396] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.303 [2024-12-06 18:03:09.944060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.303 [2024-12-06 18:03:09.944601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.303 [2024-12-06 18:03:09.944633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.303 [2024-12-06 18:03:09.944642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.303 [2024-12-06 18:03:09.944808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.303 [2024-12-06 18:03:09.944962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.303 [2024-12-06 18:03:09.944969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.303 [2024-12-06 18:03:09.944974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.303 [2024-12-06 18:03:09.944981] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.303 [2024-12-06 18:03:09.956803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.303 [2024-12-06 18:03:09.957410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.303 [2024-12-06 18:03:09.957442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.303 [2024-12-06 18:03:09.957451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.304 [2024-12-06 18:03:09.957616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.304 [2024-12-06 18:03:09.957770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.304 [2024-12-06 18:03:09.957777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.304 [2024-12-06 18:03:09.957783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.304 [2024-12-06 18:03:09.957789] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.304 [2024-12-06 18:03:09.969461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.304 [2024-12-06 18:03:09.970055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.304 [2024-12-06 18:03:09.970087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.304 [2024-12-06 18:03:09.970096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.304 [2024-12-06 18:03:09.970272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.304 [2024-12-06 18:03:09.970426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.304 [2024-12-06 18:03:09.970433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.304 [2024-12-06 18:03:09.970439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.304 [2024-12-06 18:03:09.970445] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.304 [2024-12-06 18:03:09.982113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.304 [2024-12-06 18:03:09.982706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.304 [2024-12-06 18:03:09.982738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.304 [2024-12-06 18:03:09.982750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.304 [2024-12-06 18:03:09.982916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.304 [2024-12-06 18:03:09.983070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.304 [2024-12-06 18:03:09.983077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.304 [2024-12-06 18:03:09.983083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.304 [2024-12-06 18:03:09.983089] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.304 [2024-12-06 18:03:09.994781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.304 [2024-12-06 18:03:09.995261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.304 [2024-12-06 18:03:09.995278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.304 [2024-12-06 18:03:09.995284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.304 [2024-12-06 18:03:09.995435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.304 [2024-12-06 18:03:09.995586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.304 [2024-12-06 18:03:09.995593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.304 [2024-12-06 18:03:09.995598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.304 [2024-12-06 18:03:09.995603] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.304 [2024-12-06 18:03:10.007504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.304 [2024-12-06 18:03:10.007822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.304 [2024-12-06 18:03:10.007837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.304 [2024-12-06 18:03:10.007846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.304 [2024-12-06 18:03:10.007996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.304 [2024-12-06 18:03:10.008154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.304 [2024-12-06 18:03:10.008161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.304 [2024-12-06 18:03:10.008167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.304 [2024-12-06 18:03:10.008172] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.304 [2024-12-06 18:03:10.020142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.304 [2024-12-06 18:03:10.020722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.304 [2024-12-06 18:03:10.020755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.304 [2024-12-06 18:03:10.020764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.304 [2024-12-06 18:03:10.020930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.304 [2024-12-06 18:03:10.021084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.304 [2024-12-06 18:03:10.021095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.304 [2024-12-06 18:03:10.021108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.304 [2024-12-06 18:03:10.021115] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.304 [2024-12-06 18:03:10.032797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.304 [2024-12-06 18:03:10.033358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.304 [2024-12-06 18:03:10.033390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.304 [2024-12-06 18:03:10.033400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.304 [2024-12-06 18:03:10.033566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.304 [2024-12-06 18:03:10.033720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.304 [2024-12-06 18:03:10.033728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.304 [2024-12-06 18:03:10.033734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.304 [2024-12-06 18:03:10.033740] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.304 [2024-12-06 18:03:10.045414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.304 [2024-12-06 18:03:10.045981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.304 [2024-12-06 18:03:10.046013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.304 [2024-12-06 18:03:10.046023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.304 [2024-12-06 18:03:10.046195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.304 [2024-12-06 18:03:10.046349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.304 [2024-12-06 18:03:10.046356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.304 [2024-12-06 18:03:10.046362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.304 [2024-12-06 18:03:10.046369] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.304 [2024-12-06 18:03:10.058046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.304 [2024-12-06 18:03:10.058613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.304 [2024-12-06 18:03:10.058645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.304 [2024-12-06 18:03:10.058655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.304 [2024-12-06 18:03:10.058821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.304 [2024-12-06 18:03:10.058975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.304 [2024-12-06 18:03:10.058982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.304 [2024-12-06 18:03:10.058988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.304 [2024-12-06 18:03:10.058998] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.304 [2024-12-06 18:03:10.070676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.304 [2024-12-06 18:03:10.071204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.304 [2024-12-06 18:03:10.071236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.304 [2024-12-06 18:03:10.071245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.305 [2024-12-06 18:03:10.071414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.305 [2024-12-06 18:03:10.071569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.305 [2024-12-06 18:03:10.071575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.305 [2024-12-06 18:03:10.071581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.305 [2024-12-06 18:03:10.071587] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.305 [2024-12-06 18:03:10.083407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.305 [2024-12-06 18:03:10.083927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.305 [2024-12-06 18:03:10.083959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.305 [2024-12-06 18:03:10.083969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.305 [2024-12-06 18:03:10.084143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.305 [2024-12-06 18:03:10.084297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.305 [2024-12-06 18:03:10.084305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.305 [2024-12-06 18:03:10.084311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.305 [2024-12-06 18:03:10.084317] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.305 [2024-12-06 18:03:10.096137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.305 [2024-12-06 18:03:10.096670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.305 [2024-12-06 18:03:10.096702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.305 [2024-12-06 18:03:10.096711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.305 [2024-12-06 18:03:10.096877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.305 [2024-12-06 18:03:10.097031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.305 [2024-12-06 18:03:10.097038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.305 [2024-12-06 18:03:10.097044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.305 [2024-12-06 18:03:10.097050] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.305 [2024-12-06 18:03:10.108869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.305 [2024-12-06 18:03:10.109343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.305 [2024-12-06 18:03:10.109359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.305 [2024-12-06 18:03:10.109365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.305 [2024-12-06 18:03:10.109515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.305 [2024-12-06 18:03:10.109666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.305 [2024-12-06 18:03:10.109673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.305 [2024-12-06 18:03:10.109678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.305 [2024-12-06 18:03:10.109683] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.305 [2024-12-06 18:03:10.121497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.305 [2024-12-06 18:03:10.122094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.305 [2024-12-06 18:03:10.122131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.305 [2024-12-06 18:03:10.122141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.305 [2024-12-06 18:03:10.122310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.305 [2024-12-06 18:03:10.122464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.305 [2024-12-06 18:03:10.122471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.305 [2024-12-06 18:03:10.122477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.305 [2024-12-06 18:03:10.122483] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.564 [2024-12-06 18:03:10.134157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.564 [2024-12-06 18:03:10.134726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.564 [2024-12-06 18:03:10.134758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.564 [2024-12-06 18:03:10.134767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.564 [2024-12-06 18:03:10.134933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.564 [2024-12-06 18:03:10.135087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.564 [2024-12-06 18:03:10.135094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.564 [2024-12-06 18:03:10.135106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.564 [2024-12-06 18:03:10.135113] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.564 [2024-12-06 18:03:10.146773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.564 [2024-12-06 18:03:10.147245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.564 [2024-12-06 18:03:10.147262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.564 [2024-12-06 18:03:10.147275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.564 [2024-12-06 18:03:10.147425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.564 [2024-12-06 18:03:10.147576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.564 [2024-12-06 18:03:10.147582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.565 [2024-12-06 18:03:10.147588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.565 [2024-12-06 18:03:10.147593] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.565 [2024-12-06 18:03:10.159455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.565 [2024-12-06 18:03:10.160017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.565 [2024-12-06 18:03:10.160049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.565 [2024-12-06 18:03:10.160059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.565 [2024-12-06 18:03:10.160233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.565 [2024-12-06 18:03:10.160387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.565 [2024-12-06 18:03:10.160394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.565 [2024-12-06 18:03:10.160401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.565 [2024-12-06 18:03:10.160408] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.565 [2024-12-06 18:03:10.172073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.565 [2024-12-06 18:03:10.172631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.565 [2024-12-06 18:03:10.172663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.565 [2024-12-06 18:03:10.172672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.565 [2024-12-06 18:03:10.172838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.565 [2024-12-06 18:03:10.172991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.565 [2024-12-06 18:03:10.172999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.565 [2024-12-06 18:03:10.173004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.565 [2024-12-06 18:03:10.173010] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.565 [2024-12-06 18:03:10.184693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.565 [2024-12-06 18:03:10.185283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.565 [2024-12-06 18:03:10.185315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.565 [2024-12-06 18:03:10.185325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.565 [2024-12-06 18:03:10.185491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.565 [2024-12-06 18:03:10.185652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.565 [2024-12-06 18:03:10.185663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.565 [2024-12-06 18:03:10.185669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.565 [2024-12-06 18:03:10.185676] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.565 [2024-12-06 18:03:10.197393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.565 [2024-12-06 18:03:10.197897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.565 [2024-12-06 18:03:10.197913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.565 [2024-12-06 18:03:10.197919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.565 [2024-12-06 18:03:10.198070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.565 [2024-12-06 18:03:10.198228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.565 [2024-12-06 18:03:10.198236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.565 [2024-12-06 18:03:10.198242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.565 [2024-12-06 18:03:10.198247] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.565 [2024-12-06 18:03:10.210060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.565 [2024-12-06 18:03:10.210552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.565 [2024-12-06 18:03:10.210566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.565 [2024-12-06 18:03:10.210572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.565 [2024-12-06 18:03:10.210722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.565 [2024-12-06 18:03:10.210872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.565 [2024-12-06 18:03:10.210879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.565 [2024-12-06 18:03:10.210884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.565 [2024-12-06 18:03:10.210889] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.565 [2024-12-06 18:03:10.222699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.565 [2024-12-06 18:03:10.223302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.565 [2024-12-06 18:03:10.223334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.565 [2024-12-06 18:03:10.223343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.565 [2024-12-06 18:03:10.223509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.565 [2024-12-06 18:03:10.223663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.565 [2024-12-06 18:03:10.223670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.565 [2024-12-06 18:03:10.223676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.565 [2024-12-06 18:03:10.223685] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.565 [2024-12-06 18:03:10.235373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.565 [2024-12-06 18:03:10.235831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.565 [2024-12-06 18:03:10.235862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.565 [2024-12-06 18:03:10.235871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.565 [2024-12-06 18:03:10.236037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.565 [2024-12-06 18:03:10.236199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.565 [2024-12-06 18:03:10.236207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.565 [2024-12-06 18:03:10.236213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.565 [2024-12-06 18:03:10.236219] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.565 [2024-12-06 18:03:10.248032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.565 [2024-12-06 18:03:10.248626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.565 [2024-12-06 18:03:10.248658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.565 [2024-12-06 18:03:10.248668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.565 [2024-12-06 18:03:10.248834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.565 [2024-12-06 18:03:10.248987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.565 [2024-12-06 18:03:10.248995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.565 [2024-12-06 18:03:10.249001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.565 [2024-12-06 18:03:10.249007] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.565 [2024-12-06 18:03:10.260696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.565 [2024-12-06 18:03:10.261292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.565 [2024-12-06 18:03:10.261325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.565 [2024-12-06 18:03:10.261334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.565 [2024-12-06 18:03:10.261500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.565 [2024-12-06 18:03:10.261653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.565 [2024-12-06 18:03:10.261661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.565 [2024-12-06 18:03:10.261668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.565 [2024-12-06 18:03:10.261674] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.565 [2024-12-06 18:03:10.273363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.565 [2024-12-06 18:03:10.273868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.565 [2024-12-06 18:03:10.273884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.565 [2024-12-06 18:03:10.273890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.565 [2024-12-06 18:03:10.274040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.565 [2024-12-06 18:03:10.274197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.566 [2024-12-06 18:03:10.274205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.566 [2024-12-06 18:03:10.274211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.566 [2024-12-06 18:03:10.274216] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.566 [2024-12-06 18:03:10.286030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.566 [2024-12-06 18:03:10.286571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.566 [2024-12-06 18:03:10.286586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.566 [2024-12-06 18:03:10.286592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.566 [2024-12-06 18:03:10.286742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.566 [2024-12-06 18:03:10.286892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.566 [2024-12-06 18:03:10.286899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.566 [2024-12-06 18:03:10.286905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.566 [2024-12-06 18:03:10.286910] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.566 [2024-12-06 18:03:10.298700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.566 [2024-12-06 18:03:10.299213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.566 [2024-12-06 18:03:10.299246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.566 [2024-12-06 18:03:10.299255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.566 [2024-12-06 18:03:10.299423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.566 [2024-12-06 18:03:10.299577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.566 [2024-12-06 18:03:10.299584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.566 [2024-12-06 18:03:10.299590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.566 [2024-12-06 18:03:10.299595] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.566 [2024-12-06 18:03:10.311422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.566 [2024-12-06 18:03:10.312001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.566 [2024-12-06 18:03:10.312032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.566 [2024-12-06 18:03:10.312042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.566 [2024-12-06 18:03:10.312219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.566 [2024-12-06 18:03:10.312374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.566 [2024-12-06 18:03:10.312381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.566 [2024-12-06 18:03:10.312387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.566 [2024-12-06 18:03:10.312393] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.566 [2024-12-06 18:03:10.324067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.566 [2024-12-06 18:03:10.324654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.566 [2024-12-06 18:03:10.324686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.566 [2024-12-06 18:03:10.324695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.566 [2024-12-06 18:03:10.324861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.566 [2024-12-06 18:03:10.325015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.566 [2024-12-06 18:03:10.325022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.566 [2024-12-06 18:03:10.325027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.566 [2024-12-06 18:03:10.325033] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.566 [2024-12-06 18:03:10.336707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.566 [2024-12-06 18:03:10.337233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.566 [2024-12-06 18:03:10.337265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.566 [2024-12-06 18:03:10.337274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.566 [2024-12-06 18:03:10.337440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.566 [2024-12-06 18:03:10.337594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.566 [2024-12-06 18:03:10.337602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.566 [2024-12-06 18:03:10.337608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.566 [2024-12-06 18:03:10.337615] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.566 [2024-12-06 18:03:10.349434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.566 [2024-12-06 18:03:10.349900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.566 [2024-12-06 18:03:10.349916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.566 [2024-12-06 18:03:10.349922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.566 [2024-12-06 18:03:10.350073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.566 [2024-12-06 18:03:10.350230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.566 [2024-12-06 18:03:10.350240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.566 [2024-12-06 18:03:10.350246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.566 [2024-12-06 18:03:10.350251] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.566 [2024-12-06 18:03:10.362062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.566 [2024-12-06 18:03:10.362651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.566 [2024-12-06 18:03:10.362682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.566 [2024-12-06 18:03:10.362692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.566 [2024-12-06 18:03:10.362857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.566 [2024-12-06 18:03:10.363011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.566 [2024-12-06 18:03:10.363018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.566 [2024-12-06 18:03:10.363024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.566 [2024-12-06 18:03:10.363030] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.566 [2024-12-06 18:03:10.374704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.566 [2024-12-06 18:03:10.375324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.566 [2024-12-06 18:03:10.375356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.566 [2024-12-06 18:03:10.375365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.566 [2024-12-06 18:03:10.375533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.566 [2024-12-06 18:03:10.375686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.566 [2024-12-06 18:03:10.375694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.566 [2024-12-06 18:03:10.375699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.566 [2024-12-06 18:03:10.375705] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.566 [2024-12-06 18:03:10.387389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.566 [2024-12-06 18:03:10.387874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.566 [2024-12-06 18:03:10.387906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.566 [2024-12-06 18:03:10.387916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.566 [2024-12-06 18:03:10.388082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.566 [2024-12-06 18:03:10.388245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.566 [2024-12-06 18:03:10.388253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.566 [2024-12-06 18:03:10.388260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.566 [2024-12-06 18:03:10.388270] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.827 [2024-12-06 18:03:10.400081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.827 [2024-12-06 18:03:10.400639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.827 [2024-12-06 18:03:10.400672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.827 [2024-12-06 18:03:10.400681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.827 [2024-12-06 18:03:10.400846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.827 [2024-12-06 18:03:10.401000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.827 [2024-12-06 18:03:10.401007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.827 [2024-12-06 18:03:10.401014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.827 [2024-12-06 18:03:10.401021] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.827 [2024-12-06 18:03:10.412693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.827 [2024-12-06 18:03:10.413231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.827 [2024-12-06 18:03:10.413263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.827 [2024-12-06 18:03:10.413273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.827 [2024-12-06 18:03:10.413440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.827 [2024-12-06 18:03:10.413594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.827 [2024-12-06 18:03:10.413600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.827 [2024-12-06 18:03:10.413606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.827 [2024-12-06 18:03:10.413612] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.827 [2024-12-06 18:03:10.425290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.827 [2024-12-06 18:03:10.425734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.827 [2024-12-06 18:03:10.425765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.827 [2024-12-06 18:03:10.425774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.827 [2024-12-06 18:03:10.425940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.827 [2024-12-06 18:03:10.426093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.827 [2024-12-06 18:03:10.426108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.827 [2024-12-06 18:03:10.426114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.827 [2024-12-06 18:03:10.426120] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.827 [2024-12-06 18:03:10.437925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.827 [2024-12-06 18:03:10.438482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.827 [2024-12-06 18:03:10.438514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.827 [2024-12-06 18:03:10.438523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.827 [2024-12-06 18:03:10.438689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.827 [2024-12-06 18:03:10.438842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.827 [2024-12-06 18:03:10.438849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.827 [2024-12-06 18:03:10.438855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.827 [2024-12-06 18:03:10.438861] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.827 [2024-12-06 18:03:10.450532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.827 [2024-12-06 18:03:10.451031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.827 [2024-12-06 18:03:10.451047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.827 [2024-12-06 18:03:10.451053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.827 [2024-12-06 18:03:10.451208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.827 [2024-12-06 18:03:10.451360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.827 [2024-12-06 18:03:10.451366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.827 [2024-12-06 18:03:10.451372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.827 [2024-12-06 18:03:10.451377] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.827 [2024-12-06 18:03:10.463184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.827 [2024-12-06 18:03:10.463672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.827 [2024-12-06 18:03:10.463686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.827 [2024-12-06 18:03:10.463691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.828 [2024-12-06 18:03:10.463841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.828 [2024-12-06 18:03:10.463991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.828 [2024-12-06 18:03:10.463997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.828 [2024-12-06 18:03:10.464003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.828 [2024-12-06 18:03:10.464007] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.828 [2024-12-06 18:03:10.475809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.828 [2024-12-06 18:03:10.476274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.828 [2024-12-06 18:03:10.476289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.828 [2024-12-06 18:03:10.476294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.828 [2024-12-06 18:03:10.476449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.828 [2024-12-06 18:03:10.476599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.828 [2024-12-06 18:03:10.476605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.828 [2024-12-06 18:03:10.476611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.828 [2024-12-06 18:03:10.476615] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.828 [2024-12-06 18:03:10.488425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.828 [2024-12-06 18:03:10.488957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.828 [2024-12-06 18:03:10.488988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.828 [2024-12-06 18:03:10.488998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.828 [2024-12-06 18:03:10.489171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.828 [2024-12-06 18:03:10.489326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.828 [2024-12-06 18:03:10.489333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.828 [2024-12-06 18:03:10.489339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.828 [2024-12-06 18:03:10.489345] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.828 [2024-12-06 18:03:10.501153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.828 [2024-12-06 18:03:10.501703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.828 [2024-12-06 18:03:10.501735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.828 [2024-12-06 18:03:10.501744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.828 [2024-12-06 18:03:10.501910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.828 [2024-12-06 18:03:10.502063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.828 [2024-12-06 18:03:10.502071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.828 [2024-12-06 18:03:10.502077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.828 [2024-12-06 18:03:10.502083] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.828 [2024-12-06 18:03:10.513755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.828 [2024-12-06 18:03:10.514408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.828 [2024-12-06 18:03:10.514440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.828 [2024-12-06 18:03:10.514449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.828 [2024-12-06 18:03:10.514614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.828 [2024-12-06 18:03:10.514768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.828 [2024-12-06 18:03:10.514779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.828 [2024-12-06 18:03:10.514786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.828 [2024-12-06 18:03:10.514793] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.828 [2024-12-06 18:03:10.526468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.828 [2024-12-06 18:03:10.527053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.828 [2024-12-06 18:03:10.527085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.828 [2024-12-06 18:03:10.527095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.828 [2024-12-06 18:03:10.527269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.828 [2024-12-06 18:03:10.527423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.828 [2024-12-06 18:03:10.527430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.828 [2024-12-06 18:03:10.527436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.828 [2024-12-06 18:03:10.527442] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.828 [2024-12-06 18:03:10.539108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.828 [2024-12-06 18:03:10.539719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.828 [2024-12-06 18:03:10.539751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.828 [2024-12-06 18:03:10.539760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.828 [2024-12-06 18:03:10.539925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.828 [2024-12-06 18:03:10.540079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.828 [2024-12-06 18:03:10.540086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.828 [2024-12-06 18:03:10.540092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.828 [2024-12-06 18:03:10.540098] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.828 [2024-12-06 18:03:10.551772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.828 [2024-12-06 18:03:10.552384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.828 [2024-12-06 18:03:10.552416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.828 [2024-12-06 18:03:10.552425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.828 [2024-12-06 18:03:10.552591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.828 [2024-12-06 18:03:10.552744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.828 [2024-12-06 18:03:10.552751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.828 [2024-12-06 18:03:10.552757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.828 [2024-12-06 18:03:10.552767] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.828 [2024-12-06 18:03:10.564450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.828 [2024-12-06 18:03:10.565019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.829 [2024-12-06 18:03:10.565051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.829 [2024-12-06 18:03:10.565061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.829 [2024-12-06 18:03:10.565235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.829 [2024-12-06 18:03:10.565390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.829 [2024-12-06 18:03:10.565397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.829 [2024-12-06 18:03:10.565403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.829 [2024-12-06 18:03:10.565408] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.829 [2024-12-06 18:03:10.577075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.829 [2024-12-06 18:03:10.577671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.829 [2024-12-06 18:03:10.577703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.829 [2024-12-06 18:03:10.577712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.829 [2024-12-06 18:03:10.577878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.829 [2024-12-06 18:03:10.578031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.829 [2024-12-06 18:03:10.578038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.829 [2024-12-06 18:03:10.578044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.829 [2024-12-06 18:03:10.578052] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.829 [2024-12-06 18:03:10.589741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.829 [2024-12-06 18:03:10.590248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.829 [2024-12-06 18:03:10.590280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:22.829 [2024-12-06 18:03:10.590290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:22.829 [2024-12-06 18:03:10.590458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:22.829 [2024-12-06 18:03:10.590612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.829 [2024-12-06 18:03:10.590618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.829 [2024-12-06 18:03:10.590625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.829 [2024-12-06 18:03:10.590632] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.829 [2024-12-06 18:03:10.602448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.829 [2024-12-06 18:03:10.602905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.829 [2024-12-06 18:03:10.602921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.829 [2024-12-06 18:03:10.602928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.829 [2024-12-06 18:03:10.603078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.829 [2024-12-06 18:03:10.603236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.829 [2024-12-06 18:03:10.603244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.829 [2024-12-06 18:03:10.603249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.829 [2024-12-06 18:03:10.603254] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.829 [2024-12-06 18:03:10.615059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.829 [2024-12-06 18:03:10.615646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.829 [2024-12-06 18:03:10.615678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.829 [2024-12-06 18:03:10.615688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.829 [2024-12-06 18:03:10.615853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.829 [2024-12-06 18:03:10.616007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.829 [2024-12-06 18:03:10.616014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.829 [2024-12-06 18:03:10.616020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.829 [2024-12-06 18:03:10.616025] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.829 [2024-12-06 18:03:10.627700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.829 [2024-12-06 18:03:10.628180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.829 [2024-12-06 18:03:10.628213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.829 [2024-12-06 18:03:10.628222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.829 [2024-12-06 18:03:10.628390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.829 [2024-12-06 18:03:10.628544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.829 [2024-12-06 18:03:10.628551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.829 [2024-12-06 18:03:10.628557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.829 [2024-12-06 18:03:10.628563] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:22.829 [2024-12-06 18:03:10.640384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:22.829 [2024-12-06 18:03:10.640984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.829 [2024-12-06 18:03:10.641016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:22.829 [2024-12-06 18:03:10.641025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:22.829 [2024-12-06 18:03:10.641200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:22.829 [2024-12-06 18:03:10.641355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:22.829 [2024-12-06 18:03:10.641362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:22.829 [2024-12-06 18:03:10.641369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:22.829 [2024-12-06 18:03:10.641376] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.090 [2024-12-06 18:03:10.653049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.090 [2024-12-06 18:03:10.653650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.090 [2024-12-06 18:03:10.653682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.090 [2024-12-06 18:03:10.653692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.090 [2024-12-06 18:03:10.653859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.090 [2024-12-06 18:03:10.654012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.090 [2024-12-06 18:03:10.654019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.090 [2024-12-06 18:03:10.654025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.090 [2024-12-06 18:03:10.654031] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.090 [2024-12-06 18:03:10.665714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.090 [2024-12-06 18:03:10.666296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.090 [2024-12-06 18:03:10.666328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.090 [2024-12-06 18:03:10.666338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.090 [2024-12-06 18:03:10.666503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.090 [2024-12-06 18:03:10.666657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.090 [2024-12-06 18:03:10.666664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.090 [2024-12-06 18:03:10.666670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.090 [2024-12-06 18:03:10.666676] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.090 [2024-12-06 18:03:10.678348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.090 [2024-12-06 18:03:10.678897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.090 [2024-12-06 18:03:10.678929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.090 [2024-12-06 18:03:10.678938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.090 [2024-12-06 18:03:10.679111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.090 [2024-12-06 18:03:10.679270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.090 [2024-12-06 18:03:10.679281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.090 [2024-12-06 18:03:10.679287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.090 [2024-12-06 18:03:10.679293] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.090 6016.00 IOPS, 23.50 MiB/s [2024-12-06T17:03:10.917Z] [2024-12-06 18:03:10.691014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.090 [2024-12-06 18:03:10.691614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.090 [2024-12-06 18:03:10.691646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.090 [2024-12-06 18:03:10.691655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.090 [2024-12-06 18:03:10.691820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.090 [2024-12-06 18:03:10.691974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.090 [2024-12-06 18:03:10.691981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.090 [2024-12-06 18:03:10.691987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.090 [2024-12-06 18:03:10.691993] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.091 [2024-12-06 18:03:10.703665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.091 [2024-12-06 18:03:10.704201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.091 [2024-12-06 18:03:10.704233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.091 [2024-12-06 18:03:10.704242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.091 [2024-12-06 18:03:10.704409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.091 [2024-12-06 18:03:10.704563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.091 [2024-12-06 18:03:10.704570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.091 [2024-12-06 18:03:10.704576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.091 [2024-12-06 18:03:10.704582] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.091 [2024-12-06 18:03:10.716396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.091 [2024-12-06 18:03:10.716778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.091 [2024-12-06 18:03:10.716794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.091 [2024-12-06 18:03:10.716801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.091 [2024-12-06 18:03:10.716951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.091 [2024-12-06 18:03:10.717109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.091 [2024-12-06 18:03:10.717120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.091 [2024-12-06 18:03:10.717126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.091 [2024-12-06 18:03:10.717135] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.091 [2024-12-06 18:03:10.729085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.091 [2024-12-06 18:03:10.729679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.091 [2024-12-06 18:03:10.729712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.091 [2024-12-06 18:03:10.729721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.091 [2024-12-06 18:03:10.729887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.091 [2024-12-06 18:03:10.730040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.091 [2024-12-06 18:03:10.730048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.091 [2024-12-06 18:03:10.730054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.091 [2024-12-06 18:03:10.730060] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.091 [2024-12-06 18:03:10.741729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.091 [2024-12-06 18:03:10.742318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.091 [2024-12-06 18:03:10.742350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.091 [2024-12-06 18:03:10.742360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.091 [2024-12-06 18:03:10.742528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.091 [2024-12-06 18:03:10.742681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.091 [2024-12-06 18:03:10.742689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.091 [2024-12-06 18:03:10.742695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.091 [2024-12-06 18:03:10.742702] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.091 [2024-12-06 18:03:10.754378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.091 [2024-12-06 18:03:10.754969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.091 [2024-12-06 18:03:10.755001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.091 [2024-12-06 18:03:10.755010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.091 [2024-12-06 18:03:10.755189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.091 [2024-12-06 18:03:10.755343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.091 [2024-12-06 18:03:10.755350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.091 [2024-12-06 18:03:10.755357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.091 [2024-12-06 18:03:10.755363] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.091 [2024-12-06 18:03:10.767033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.091 [2024-12-06 18:03:10.767613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.091 [2024-12-06 18:03:10.767645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.091 [2024-12-06 18:03:10.767655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.091 [2024-12-06 18:03:10.767821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.091 [2024-12-06 18:03:10.767974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.091 [2024-12-06 18:03:10.767982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.091 [2024-12-06 18:03:10.767989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.091 [2024-12-06 18:03:10.767996] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.091 [2024-12-06 18:03:10.779695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.091 [2024-12-06 18:03:10.780238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.091 [2024-12-06 18:03:10.780269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.091 [2024-12-06 18:03:10.780279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.091 [2024-12-06 18:03:10.780447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.091 [2024-12-06 18:03:10.780601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.091 [2024-12-06 18:03:10.780608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.091 [2024-12-06 18:03:10.780614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.091 [2024-12-06 18:03:10.780620] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.091 [2024-12-06 18:03:10.792307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.091 [2024-12-06 18:03:10.793185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.091 [2024-12-06 18:03:10.793207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.091 [2024-12-06 18:03:10.793215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.092 [2024-12-06 18:03:10.793372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.092 [2024-12-06 18:03:10.793525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.092 [2024-12-06 18:03:10.793532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.092 [2024-12-06 18:03:10.793537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.092 [2024-12-06 18:03:10.793542] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.092 [2024-12-06 18:03:10.804929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.092 [2024-12-06 18:03:10.805398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.092 [2024-12-06 18:03:10.805413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.092 [2024-12-06 18:03:10.805423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.092 [2024-12-06 18:03:10.805574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.092 [2024-12-06 18:03:10.805724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.092 [2024-12-06 18:03:10.805731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.092 [2024-12-06 18:03:10.805736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.092 [2024-12-06 18:03:10.805741] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.092 [2024-12-06 18:03:10.817568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.092 [2024-12-06 18:03:10.818024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.092 [2024-12-06 18:03:10.818039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.092 [2024-12-06 18:03:10.818044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.092 [2024-12-06 18:03:10.818199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.092 [2024-12-06 18:03:10.818351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.092 [2024-12-06 18:03:10.818357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.092 [2024-12-06 18:03:10.818363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.092 [2024-12-06 18:03:10.818368] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.092 [2024-12-06 18:03:10.830177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.092 [2024-12-06 18:03:10.830712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.092 [2024-12-06 18:03:10.830744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.092 [2024-12-06 18:03:10.830753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.092 [2024-12-06 18:03:10.830919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.092 [2024-12-06 18:03:10.831073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.092 [2024-12-06 18:03:10.831081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.092 [2024-12-06 18:03:10.831087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.092 [2024-12-06 18:03:10.831093] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.092 [2024-12-06 18:03:10.842768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.092 [2024-12-06 18:03:10.843263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.092 [2024-12-06 18:03:10.843280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.092 [2024-12-06 18:03:10.843287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.092 [2024-12-06 18:03:10.843437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.092 [2024-12-06 18:03:10.843593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.092 [2024-12-06 18:03:10.843600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.092 [2024-12-06 18:03:10.843606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.092 [2024-12-06 18:03:10.843611] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.092 [2024-12-06 18:03:10.855429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.092 [2024-12-06 18:03:10.855873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.092 [2024-12-06 18:03:10.855887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.092 [2024-12-06 18:03:10.855893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.092 [2024-12-06 18:03:10.856043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.092 [2024-12-06 18:03:10.856199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.092 [2024-12-06 18:03:10.856207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.092 [2024-12-06 18:03:10.856213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.092 [2024-12-06 18:03:10.856218] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.092 [2024-12-06 18:03:10.868019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.092 [2024-12-06 18:03:10.868388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.092 [2024-12-06 18:03:10.868403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.092 [2024-12-06 18:03:10.868409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.092 [2024-12-06 18:03:10.868558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.092 [2024-12-06 18:03:10.868709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.092 [2024-12-06 18:03:10.868716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.092 [2024-12-06 18:03:10.868721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.092 [2024-12-06 18:03:10.868726] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.092 [2024-12-06 18:03:10.880675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.092 [2024-12-06 18:03:10.881123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.092 [2024-12-06 18:03:10.881137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.092 [2024-12-06 18:03:10.881143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.092 [2024-12-06 18:03:10.881293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.093 [2024-12-06 18:03:10.881444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.093 [2024-12-06 18:03:10.881451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.093 [2024-12-06 18:03:10.881456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.093 [2024-12-06 18:03:10.881465] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.093 [2024-12-06 18:03:10.893285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.093 [2024-12-06 18:03:10.893869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.093 [2024-12-06 18:03:10.893901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.093 [2024-12-06 18:03:10.893911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.093 [2024-12-06 18:03:10.894077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.093 [2024-12-06 18:03:10.894239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.093 [2024-12-06 18:03:10.894247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.093 [2024-12-06 18:03:10.894253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.093 [2024-12-06 18:03:10.894259] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.093 [2024-12-06 18:03:10.905927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.093 [2024-12-06 18:03:10.906402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.093 [2024-12-06 18:03:10.906418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.093 [2024-12-06 18:03:10.906424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.093 [2024-12-06 18:03:10.906574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.093 [2024-12-06 18:03:10.906725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.093 [2024-12-06 18:03:10.906732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.093 [2024-12-06 18:03:10.906737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.093 [2024-12-06 18:03:10.906742] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.354 [2024-12-06 18:03:10.918556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.354 [2024-12-06 18:03:10.919091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.354 [2024-12-06 18:03:10.919129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.354 [2024-12-06 18:03:10.919139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.354 [2024-12-06 18:03:10.919308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.354 [2024-12-06 18:03:10.919461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.354 [2024-12-06 18:03:10.919468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.354 [2024-12-06 18:03:10.919475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.354 [2024-12-06 18:03:10.919481] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.354 [2024-12-06 18:03:10.931156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.354 [2024-12-06 18:03:10.931682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.354 [2024-12-06 18:03:10.931698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.354 [2024-12-06 18:03:10.931705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.354 [2024-12-06 18:03:10.931855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.354 [2024-12-06 18:03:10.932006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.354 [2024-12-06 18:03:10.932012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.354 [2024-12-06 18:03:10.932018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.354 [2024-12-06 18:03:10.932023] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.354 [2024-12-06 18:03:10.943836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.354 [2024-12-06 18:03:10.944319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.354 [2024-12-06 18:03:10.944351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.354 [2024-12-06 18:03:10.944361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.354 [2024-12-06 18:03:10.944528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.354 [2024-12-06 18:03:10.944682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.354 [2024-12-06 18:03:10.944689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.354 [2024-12-06 18:03:10.944695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.354 [2024-12-06 18:03:10.944701] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.354 [2024-12-06 18:03:10.956530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.354 [2024-12-06 18:03:10.957018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.354 [2024-12-06 18:03:10.957035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.354 [2024-12-06 18:03:10.957041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.354 [2024-12-06 18:03:10.957198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.354 [2024-12-06 18:03:10.957349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.354 [2024-12-06 18:03:10.957357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.354 [2024-12-06 18:03:10.957362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.354 [2024-12-06 18:03:10.957367] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.354 [2024-12-06 18:03:10.969213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.354 [2024-12-06 18:03:10.969699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.354 [2024-12-06 18:03:10.969713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.354 [2024-12-06 18:03:10.969722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.354 [2024-12-06 18:03:10.969872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.354 [2024-12-06 18:03:10.970023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.354 [2024-12-06 18:03:10.970030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.354 [2024-12-06 18:03:10.970035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.354 [2024-12-06 18:03:10.970040] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.354 [2024-12-06 18:03:10.981850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.354 [2024-12-06 18:03:10.982340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.354 [2024-12-06 18:03:10.982354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.354 [2024-12-06 18:03:10.982360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.354 [2024-12-06 18:03:10.982510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.354 [2024-12-06 18:03:10.982661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.354 [2024-12-06 18:03:10.982667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.354 [2024-12-06 18:03:10.982673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.355 [2024-12-06 18:03:10.982678] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.355 [2024-12-06 18:03:10.994522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.355 [2024-12-06 18:03:10.994970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.355 [2024-12-06 18:03:10.994984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.355 [2024-12-06 18:03:10.994989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.355 [2024-12-06 18:03:10.995144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.355 [2024-12-06 18:03:10.995299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.355 [2024-12-06 18:03:10.995306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.355 [2024-12-06 18:03:10.995311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.355 [2024-12-06 18:03:10.995316] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.355 [2024-12-06 18:03:11.007124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.355 [2024-12-06 18:03:11.007686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.355 [2024-12-06 18:03:11.007718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.355 [2024-12-06 18:03:11.007727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.355 [2024-12-06 18:03:11.007892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.355 [2024-12-06 18:03:11.008050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.355 [2024-12-06 18:03:11.008058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.355 [2024-12-06 18:03:11.008064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.355 [2024-12-06 18:03:11.008070] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.355 [2024-12-06 18:03:11.019746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.355 [2024-12-06 18:03:11.020216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.355 [2024-12-06 18:03:11.020250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.355 [2024-12-06 18:03:11.020259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.355 [2024-12-06 18:03:11.020427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.355 [2024-12-06 18:03:11.020581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.355 [2024-12-06 18:03:11.020588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.355 [2024-12-06 18:03:11.020595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.355 [2024-12-06 18:03:11.020602] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.355 [2024-12-06 18:03:11.032425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.355 [2024-12-06 18:03:11.032891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.355 [2024-12-06 18:03:11.032924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.355 [2024-12-06 18:03:11.032933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.355 [2024-12-06 18:03:11.033108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.355 [2024-12-06 18:03:11.033262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.355 [2024-12-06 18:03:11.033270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.355 [2024-12-06 18:03:11.033275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.355 [2024-12-06 18:03:11.033281] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.355 [2024-12-06 18:03:11.045105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.355 [2024-12-06 18:03:11.045681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.355 [2024-12-06 18:03:11.045713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.355 [2024-12-06 18:03:11.045722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.355 [2024-12-06 18:03:11.045889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.355 [2024-12-06 18:03:11.046043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.355 [2024-12-06 18:03:11.046050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.355 [2024-12-06 18:03:11.046056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.355 [2024-12-06 18:03:11.046065] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.355 [2024-12-06 18:03:11.057753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.355 [2024-12-06 18:03:11.058245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.355 [2024-12-06 18:03:11.058277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.355 [2024-12-06 18:03:11.058287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.355 [2024-12-06 18:03:11.058455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.355 [2024-12-06 18:03:11.058609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.355 [2024-12-06 18:03:11.058616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.355 [2024-12-06 18:03:11.058622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.355 [2024-12-06 18:03:11.058628] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.355 [2024-12-06 18:03:11.070446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.355 [2024-12-06 18:03:11.070920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.355 [2024-12-06 18:03:11.070937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.355 [2024-12-06 18:03:11.070943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.355 [2024-12-06 18:03:11.071094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.355 [2024-12-06 18:03:11.071253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.355 [2024-12-06 18:03:11.071260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.355 [2024-12-06 18:03:11.071265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.355 [2024-12-06 18:03:11.071270] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.355 [2024-12-06 18:03:11.083087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.355 [2024-12-06 18:03:11.083684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.355 [2024-12-06 18:03:11.083716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.355 [2024-12-06 18:03:11.083726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.355 [2024-12-06 18:03:11.083892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.355 [2024-12-06 18:03:11.084046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.355 [2024-12-06 18:03:11.084054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.356 [2024-12-06 18:03:11.084061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.356 [2024-12-06 18:03:11.084067] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.356 [2024-12-06 18:03:11.095755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.356 [2024-12-06 18:03:11.096513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.356 [2024-12-06 18:03:11.096545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.356 [2024-12-06 18:03:11.096555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.356 [2024-12-06 18:03:11.096720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.356 [2024-12-06 18:03:11.096874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.356 [2024-12-06 18:03:11.096881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.356 [2024-12-06 18:03:11.096887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.356 [2024-12-06 18:03:11.096893] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.356 [2024-12-06 18:03:11.108439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.356 [2024-12-06 18:03:11.108927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.356 [2024-12-06 18:03:11.108944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.356 [2024-12-06 18:03:11.108951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.356 [2024-12-06 18:03:11.109105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.356 [2024-12-06 18:03:11.109258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.356 [2024-12-06 18:03:11.109265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.356 [2024-12-06 18:03:11.109270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.356 [2024-12-06 18:03:11.109275] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.356 [2024-12-06 18:03:11.121084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:23.356 [2024-12-06 18:03:11.121577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:23.356 [2024-12-06 18:03:11.121591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420
00:26:23.356 [2024-12-06 18:03:11.121597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set
00:26:23.356 [2024-12-06 18:03:11.121747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor
00:26:23.356 [2024-12-06 18:03:11.121898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:23.356 [2024-12-06 18:03:11.121904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:23.356 [2024-12-06 18:03:11.121910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:23.356 [2024-12-06 18:03:11.121916] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:23.356 [2024-12-06 18:03:11.133728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.356 [2024-12-06 18:03:11.134239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.356 [2024-12-06 18:03:11.134271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.356 [2024-12-06 18:03:11.134284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.356 [2024-12-06 18:03:11.134449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.356 [2024-12-06 18:03:11.134603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.356 [2024-12-06 18:03:11.134610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.356 [2024-12-06 18:03:11.134616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.356 [2024-12-06 18:03:11.134622] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.356 [2024-12-06 18:03:11.146455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.356 [2024-12-06 18:03:11.146929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.356 [2024-12-06 18:03:11.146945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.356 [2024-12-06 18:03:11.146951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.356 [2024-12-06 18:03:11.147107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.356 [2024-12-06 18:03:11.147259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.356 [2024-12-06 18:03:11.147266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.356 [2024-12-06 18:03:11.147272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.356 [2024-12-06 18:03:11.147277] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.356 [2024-12-06 18:03:11.159093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.356 [2024-12-06 18:03:11.159561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.356 [2024-12-06 18:03:11.159575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.356 [2024-12-06 18:03:11.159581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.356 [2024-12-06 18:03:11.159731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.356 [2024-12-06 18:03:11.159882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.356 [2024-12-06 18:03:11.159888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.356 [2024-12-06 18:03:11.159894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.356 [2024-12-06 18:03:11.159899] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.356 [2024-12-06 18:03:11.171711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.356 [2024-12-06 18:03:11.172310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.356 [2024-12-06 18:03:11.172342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.356 [2024-12-06 18:03:11.172352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.356 [2024-12-06 18:03:11.172519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.356 [2024-12-06 18:03:11.172676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.356 [2024-12-06 18:03:11.172683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.356 [2024-12-06 18:03:11.172690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.356 [2024-12-06 18:03:11.172696] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.617 [2024-12-06 18:03:11.184377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.617 [2024-12-06 18:03:11.184803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.617 [2024-12-06 18:03:11.184835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.617 [2024-12-06 18:03:11.184844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.617 [2024-12-06 18:03:11.185010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.617 [2024-12-06 18:03:11.185171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.617 [2024-12-06 18:03:11.185179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.617 [2024-12-06 18:03:11.185185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.617 [2024-12-06 18:03:11.185191] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.617 [2024-12-06 18:03:11.197033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.617 [2024-12-06 18:03:11.197577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.617 [2024-12-06 18:03:11.197608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.617 [2024-12-06 18:03:11.197618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.617 [2024-12-06 18:03:11.197786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.617 [2024-12-06 18:03:11.197939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.617 [2024-12-06 18:03:11.197947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.617 [2024-12-06 18:03:11.197954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.617 [2024-12-06 18:03:11.197960] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.617 [2024-12-06 18:03:11.209650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.617 [2024-12-06 18:03:11.210232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.617 [2024-12-06 18:03:11.210264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.618 [2024-12-06 18:03:11.210274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.618 [2024-12-06 18:03:11.210443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.618 [2024-12-06 18:03:11.210596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.618 [2024-12-06 18:03:11.210604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.618 [2024-12-06 18:03:11.210609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.618 [2024-12-06 18:03:11.210620] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.618 [2024-12-06 18:03:11.222298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.618 [2024-12-06 18:03:11.222786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.618 [2024-12-06 18:03:11.222803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.618 [2024-12-06 18:03:11.222809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.618 [2024-12-06 18:03:11.222959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.618 [2024-12-06 18:03:11.223114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.618 [2024-12-06 18:03:11.223121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.618 [2024-12-06 18:03:11.223127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.618 [2024-12-06 18:03:11.223132] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3206982 Killed "${NVMF_APP[@]}" "$@" 00:26:23.618 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:23.618 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:23.618 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:23.618 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:23.618 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.618 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3209314 00:26:23.618 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3209314 00:26:23.618 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:23.618 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3209314 ']' 00:26:23.618 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.618 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:23.618 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.618 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:23.618 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.618 [2024-12-06 18:03:11.234994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.618 [2024-12-06 18:03:11.235496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.618 [2024-12-06 18:03:11.235512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.618 [2024-12-06 18:03:11.235518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.618 [2024-12-06 18:03:11.235668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.618 [2024-12-06 18:03:11.235820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.618 [2024-12-06 18:03:11.235827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.618 [2024-12-06 18:03:11.235835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.618 [2024-12-06 18:03:11.235841] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
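This chunk is the turning point of the test: bdevperf.sh line 35 reports that the old target (pid 3206982) was killed on purpose, and tgt_init/nvmfappstart starts a fresh nvmf_tgt (pid 3209314) inside the cvl_0_0_ns_spdk namespace while waitforlisten polls for its RPC socket. A minimal sketch of that launch-and-wait pattern, assuming an SPDK checkout with rpc.py on hand (the retry count and sleep interval are illustrative, not the values autotest_common.sh actually uses):

    # start the target in the test namespace, mirroring the traced command line
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll until the target answers on the default RPC socket
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done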
00:26:23.618 [2024-12-06 18:03:11.247671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.618 [2024-12-06 18:03:11.248126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.618 [2024-12-06 18:03:11.248141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.618 [2024-12-06 18:03:11.248147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.618 [2024-12-06 18:03:11.248297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.618 [2024-12-06 18:03:11.248448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.618 [2024-12-06 18:03:11.248455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.618 [2024-12-06 18:03:11.248460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.618 [2024-12-06 18:03:11.248465] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.618 [2024-12-06 18:03:11.260282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.618 [2024-12-06 18:03:11.260853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.618 [2024-12-06 18:03:11.260885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.618 [2024-12-06 18:03:11.260894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.618 [2024-12-06 18:03:11.261060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.618 [2024-12-06 18:03:11.261220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.618 [2024-12-06 18:03:11.261228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.618 [2024-12-06 18:03:11.261234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.618 [2024-12-06 18:03:11.261241] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.618 [2024-12-06 18:03:11.265130] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:26:23.618 [2024-12-06 18:03:11.265177] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.618 [2024-12-06 18:03:11.272908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.618 [2024-12-06 18:03:11.273383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.618 [2024-12-06 18:03:11.273400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.618 [2024-12-06 18:03:11.273407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.618 [2024-12-06 18:03:11.273559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.618 [2024-12-06 18:03:11.273711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.618 [2024-12-06 18:03:11.273722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.618 [2024-12-06 18:03:11.273728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.618 [2024-12-06 18:03:11.273734] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.618 [2024-12-06 18:03:11.285549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.618 [2024-12-06 18:03:11.286004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.618 [2024-12-06 18:03:11.286017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.618 [2024-12-06 18:03:11.286024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.618 [2024-12-06 18:03:11.286177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.618 [2024-12-06 18:03:11.286328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.618 [2024-12-06 18:03:11.286335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.618 [2024-12-06 18:03:11.286341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.618 [2024-12-06 18:03:11.286346] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.618 [2024-12-06 18:03:11.298166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.618 [2024-12-06 18:03:11.298612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.618 [2024-12-06 18:03:11.298644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.618 [2024-12-06 18:03:11.298654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.618 [2024-12-06 18:03:11.298820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.618 [2024-12-06 18:03:11.298974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.618 [2024-12-06 18:03:11.298981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.619 [2024-12-06 18:03:11.298988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.619 [2024-12-06 18:03:11.298995] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.619 [2024-12-06 18:03:11.310813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.619 [2024-12-06 18:03:11.311317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.619 [2024-12-06 18:03:11.311334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.619 [2024-12-06 18:03:11.311340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.619 [2024-12-06 18:03:11.311491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.619 [2024-12-06 18:03:11.311641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.619 [2024-12-06 18:03:11.311648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.619 [2024-12-06 18:03:11.311654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.619 [2024-12-06 18:03:11.311659] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.619 [2024-12-06 18:03:11.323472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.619 [2024-12-06 18:03:11.324004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.619 [2024-12-06 18:03:11.324017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.619 [2024-12-06 18:03:11.324023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.619 [2024-12-06 18:03:11.324178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.619 [2024-12-06 18:03:11.324329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.619 [2024-12-06 18:03:11.324336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.619 [2024-12-06 18:03:11.324341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.619 [2024-12-06 18:03:11.324346] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.619 [2024-12-06 18:03:11.336104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.619 [2024-12-06 18:03:11.336670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.619 [2024-12-06 18:03:11.336702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.619 [2024-12-06 18:03:11.336711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.619 [2024-12-06 18:03:11.336877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.619 [2024-12-06 18:03:11.337031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.619 [2024-12-06 18:03:11.337038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.619 [2024-12-06 18:03:11.337044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.619 [2024-12-06 18:03:11.337051] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.619 [2024-12-06 18:03:11.337424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:23.619 [2024-12-06 18:03:11.348734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.619 [2024-12-06 18:03:11.349390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.619 [2024-12-06 18:03:11.349423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.619 [2024-12-06 18:03:11.349433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.619 [2024-12-06 18:03:11.349599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.619 [2024-12-06 18:03:11.349753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.619 [2024-12-06 18:03:11.349760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.619 [2024-12-06 18:03:11.349766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.619 [2024-12-06 18:03:11.349772] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.619 [2024-12-06 18:03:11.361463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.619 [2024-12-06 18:03:11.361960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.619 [2024-12-06 18:03:11.361993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.619 [2024-12-06 18:03:11.362002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.619 [2024-12-06 18:03:11.362175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.619 [2024-12-06 18:03:11.362329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.619 [2024-12-06 18:03:11.362336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.619 [2024-12-06 18:03:11.362342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.619 [2024-12-06 18:03:11.362348] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.619 [2024-12-06 18:03:11.366747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.619 [2024-12-06 18:03:11.366769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:23.619 [2024-12-06 18:03:11.366776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:23.619 [2024-12-06 18:03:11.366782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:23.619 [2024-12-06 18:03:11.366787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
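The app_setup_trace notices above follow from the -e 0xFFFF flag: all tracepoint groups are enabled and the trace history sits in shared memory. The log itself names the two ways to harvest it; the /tmp destination below is just an example:

    spdk_trace -s nvmf -i 0          # snapshot events from the running app
    cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the raw buffer for offline analysis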
00:26:23.619 [2024-12-06 18:03:11.367874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:23.619 [2024-12-06 18:03:11.368031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.619 [2024-12-06 18:03:11.368033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:23.619 [2024-12-06 18:03:11.374170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.619 [2024-12-06 18:03:11.374800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.619 [2024-12-06 18:03:11.374831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.619 [2024-12-06 18:03:11.374842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.619 [2024-12-06 18:03:11.375009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.619 [2024-12-06 18:03:11.375168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.619 [2024-12-06 18:03:11.375176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.619 [2024-12-06 18:03:11.375183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.619 [2024-12-06 18:03:11.375189] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.619 [2024-12-06 18:03:11.386871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.619 [2024-12-06 18:03:11.387365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.619 [2024-12-06 18:03:11.387397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.619 [2024-12-06 18:03:11.387407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.619 [2024-12-06 18:03:11.387576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.619 [2024-12-06 18:03:11.387730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.619 [2024-12-06 18:03:11.387742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.619 [2024-12-06 18:03:11.387748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.619 [2024-12-06 18:03:11.387754] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
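Reactors on cores 1, 2 and 3 is exactly what the -m 0xE core mask asks for: 0xE is binary 1110, so cores 1 through 3 are selected and core 0 stays free, which also matches the earlier "Total cores available: 3" notice. A one-liner to check the mask:

    echo "obase=2; $((0xE))" | bc   # prints 1110: bits for cores 3, 2, 1 set, core 0 clear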
00:26:23.619 [2024-12-06 18:03:11.399588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.619 [2024-12-06 18:03:11.400075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.619 [2024-12-06 18:03:11.400091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.619 [2024-12-06 18:03:11.400098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.619 [2024-12-06 18:03:11.400256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.619 [2024-12-06 18:03:11.400407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.619 [2024-12-06 18:03:11.400413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.619 [2024-12-06 18:03:11.400419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.619 [2024-12-06 18:03:11.400425] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.619 [2024-12-06 18:03:11.412237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.619 [2024-12-06 18:03:11.412862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.620 [2024-12-06 18:03:11.412896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.620 [2024-12-06 18:03:11.412906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.620 [2024-12-06 18:03:11.413072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.620 [2024-12-06 18:03:11.413233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.620 [2024-12-06 18:03:11.413241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.620 [2024-12-06 18:03:11.413247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.620 [2024-12-06 18:03:11.413253] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.620 [2024-12-06 18:03:11.424918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.620 [2024-12-06 18:03:11.425555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.620 [2024-12-06 18:03:11.425587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.620 [2024-12-06 18:03:11.425597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.620 [2024-12-06 18:03:11.425763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.620 [2024-12-06 18:03:11.425917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.620 [2024-12-06 18:03:11.425924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.620 [2024-12-06 18:03:11.425930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.620 [2024-12-06 18:03:11.425940] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.620 [2024-12-06 18:03:11.437618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.620 [2024-12-06 18:03:11.438141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.620 [2024-12-06 18:03:11.438163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.620 [2024-12-06 18:03:11.438170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.620 [2024-12-06 18:03:11.438326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.620 [2024-12-06 18:03:11.438478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.620 [2024-12-06 18:03:11.438485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.620 [2024-12-06 18:03:11.438491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.620 [2024-12-06 18:03:11.438496] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.620 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:23.620 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:23.620 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:23.620 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:23.620 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.880 [2024-12-06 18:03:11.450313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.880 [2024-12-06 18:03:11.450636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.880 [2024-12-06 18:03:11.450651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.880 [2024-12-06 18:03:11.450657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.880 [2024-12-06 18:03:11.450807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.880 [2024-12-06 18:03:11.450957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.880 [2024-12-06 18:03:11.450964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.880 [2024-12-06 18:03:11.450971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.880 [2024-12-06 18:03:11.450976] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.880 [2024-12-06 18:03:11.462944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.880 [2024-12-06 18:03:11.463301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.880 [2024-12-06 18:03:11.463316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.880 [2024-12-06 18:03:11.463322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.880 [2024-12-06 18:03:11.463472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.880 [2024-12-06 18:03:11.463623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.880 [2024-12-06 18:03:11.463630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.880 [2024-12-06 18:03:11.463639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.880 [2024-12-06 18:03:11.463645] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.880 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.880 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:23.880 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.880 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.880 [2024-12-06 18:03:11.471432] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:23.880 [2024-12-06 18:03:11.475600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.880 [2024-12-06 18:03:11.476144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.880 [2024-12-06 18:03:11.476176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.880 [2024-12-06 18:03:11.476185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.880 [2024-12-06 18:03:11.476351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.880 [2024-12-06 18:03:11.476505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.880 [2024-12-06 18:03:11.476513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.880 [2024-12-06 18:03:11.476519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.880 [2024-12-06 18:03:11.476525] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
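Interleaved with the reconnect noise, the fresh target is now being configured: host/bdevperf.sh@17 creates the TCP transport and the target acknowledges with "TCP Transport Init". Outside the harness the same step would look roughly like this (rpc_cmd is a thin wrapper around rpc.py; -u 8192 sets the I/O unit size, while -o is carried in from the suite's NVMF_TRANSPORT_OPTS):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192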
00:26:23.880 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.880 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:23.880 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.880 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.881 [2024-12-06 18:03:11.488201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.881 [2024-12-06 18:03:11.488739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.881 [2024-12-06 18:03:11.488770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.881 [2024-12-06 18:03:11.488780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.881 [2024-12-06 18:03:11.488946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.881 [2024-12-06 18:03:11.489104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.881 [2024-12-06 18:03:11.489112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.881 [2024-12-06 18:03:11.489118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.881 [2024-12-06 18:03:11.489124] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.881 [2024-12-06 18:03:11.500802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.881 [2024-12-06 18:03:11.501393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.881 [2024-12-06 18:03:11.501424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.881 [2024-12-06 18:03:11.501438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.881 [2024-12-06 18:03:11.501605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.881 [2024-12-06 18:03:11.501759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.881 [2024-12-06 18:03:11.501766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.881 [2024-12-06 18:03:11.501772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.881 [2024-12-06 18:03:11.501779] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.881 Malloc0 00:26:23.881 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.881 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:23.881 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.881 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.881 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.881 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:23.881 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.881 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.881 [2024-12-06 18:03:11.513449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.881 [2024-12-06 18:03:11.513960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.881 [2024-12-06 18:03:11.513976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10ee780 with addr=10.0.0.2, port=4420 00:26:23.881 [2024-12-06 18:03:11.513982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ee780 is same with the state(6) to be set 00:26:23.881 [2024-12-06 18:03:11.514137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ee780 (9): Bad file descriptor 00:26:23.881 [2024-12-06 18:03:11.514289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.881 [2024-12-06 18:03:11.514295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.881 [2024-12-06 18:03:11.514301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.881 [2024-12-06 18:03:11.514306] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.881 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.881 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:23.881 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.881 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.881 [2024-12-06 18:03:11.521890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.881 [2024-12-06 18:03:11.526105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.881 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.881 18:03:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3207531 00:26:23.881 [2024-12-06 18:03:11.592777] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
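The "Malloc0" echo at the top of this chunk is the return value of the bdev_malloc_create call traced just before it (size in MiB, then block size in bytes, so a 64 MiB bdev with 512-byte blocks). Lines @19 through @21 then rebuild the storage path the host has been hammering on: create subsystem nqn.2016-06.io.spdk:cnode1 (-a admits any host NQN, -s sets the serial number), attach Malloc0 as its namespace, and open the 10.0.0.2:4420 listener; host/bdevperf.sh@38 resumes waiting on the bdevperf process (pid 3207531), and the host's next reset attempt finally succeeds ("Resetting controller successful"). The same steps as standalone RPCs:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420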
00:26:25.258 5199.50 IOPS, 20.31 MiB/s [2024-12-06T17:03:14.042Z] 6376.86 IOPS, 24.91 MiB/s [2024-12-06T17:03:14.980Z] 7228.38 IOPS, 28.24 MiB/s [2024-12-06T17:03:15.916Z] 7885.22 IOPS, 30.80 MiB/s [2024-12-06T17:03:16.850Z] 8419.80 IOPS, 32.89 MiB/s [2024-12-06T17:03:17.785Z] 8847.27 IOPS, 34.56 MiB/s [2024-12-06T17:03:18.735Z] 9215.75 IOPS, 36.00 MiB/s [2024-12-06T17:03:20.116Z] 9515.08 IOPS, 37.17 MiB/s [2024-12-06T17:03:21.054Z] 9782.79 IOPS, 38.21 MiB/s [2024-12-06T17:03:21.054Z] 10010.87 IOPS, 39.10 MiB/s 00:26:33.227 Latency(us) 00:26:33.227 [2024-12-06T17:03:21.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:33.227 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:33.227 Verification LBA range: start 0x0 length 0x4000 00:26:33.227 Nvme1n1 : 15.01 10011.65 39.11 12087.31 0.00 5774.64 566.61 15073.28 00:26:33.227 [2024-12-06T17:03:21.054Z] =================================================================================================================== 00:26:33.227 [2024-12-06T17:03:21.054Z] Total : 10011.65 39.11 12087.31 0.00 5774.64 566.61 15073.28 00:26:33.227 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:33.227 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:33.227 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.227 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:33.227 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.227 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:33.227 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:33.227 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:33.227 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:26:33.227 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:33.227 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:26:33.227 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:33.227 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:33.227 rmmod nvme_tcp 00:26:33.227 rmmod nvme_fabrics 00:26:33.227 rmmod nvme_keyring 00:26:33.227 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:33.227 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:26:33.227 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:26:33.228 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3209314 ']' 00:26:33.228 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3209314 00:26:33.228 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3209314 ']' 00:26:33.228 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3209314 00:26:33.228 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:26:33.228 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:33.228 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3209314 00:26:33.228 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:33.228 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:33.228 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3209314' 00:26:33.228 killing process with pid 3209314 00:26:33.228 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3209314 00:26:33.228 18:03:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3209314 00:26:33.228 18:03:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:33.228 18:03:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:33.228 18:03:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:33.228 18:03:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:26:33.228 18:03:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:26:33.228 18:03:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:33.228 18:03:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:26:33.228 18:03:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:33.228 18:03:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:33.228 18:03:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.228 18:03:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.228 18:03:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:35.768 00:26:35.768 real 0m25.138s 00:26:35.768 user 1m0.339s 00:26:35.768 sys 0m5.627s 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:35.768 ************************************ 00:26:35.768 END TEST nvmf_bdevperf 00:26:35.768 ************************************ 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.768 ************************************ 00:26:35.768 START TEST nvmf_target_disconnect 00:26:35.768 ************************************ 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:35.768 * Looking for test storage... 
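Two notes on the chunk above. First, the bdevperf summary: throughput ramps from about 5200 IOPS right after the reconnect up to about 10000 IOPS, and the summary row says Nvme1n1 ran 15.01 s at 10011.65 IOPS (39.11 MiB/s) with an average latency of 5774.64 us (min 566.61, max 15073.28); the Fail/s column (12087.31) appears to count the I/O that failed per second while the target was down. The IOPS and MiB/s columns agree for the 4096-byte I/O size in use:

    echo '10011.65 * 4096 / 1024 / 1024' | bc -l   # 39.108..., matching 39.11 MiB/s

Second, the teardown (nvmftestfini -> nvmfcleanup -> nvmf_tcp_fini) unloads the nvme kernel modules, kills target pid 3209314, replays an iptables dump with the suite's SPDK_NVMF rules filtered out, removes the test namespace and flushes the peer interface. Condensed (the netns delete is an assumption about what _remove_spdk_ns does; the other two lines are verbatim from the trace):

    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1

With that, nvmf_bdevperf closes after about 25 s of wall time (real 0m25.138s) and run_test moves straight on to nvmf_target_disconnect.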
00:26:35.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:35.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.768 --rc genhtml_branch_coverage=1 00:26:35.768 --rc genhtml_function_coverage=1 00:26:35.768 --rc genhtml_legend=1 00:26:35.768 --rc geninfo_all_blocks=1 00:26:35.768 --rc geninfo_unexecuted_blocks=1 00:26:35.768 00:26:35.768 ' 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:35.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.768 --rc genhtml_branch_coverage=1 00:26:35.768 --rc genhtml_function_coverage=1 00:26:35.768 --rc genhtml_legend=1 00:26:35.768 --rc geninfo_all_blocks=1 00:26:35.768 --rc geninfo_unexecuted_blocks=1 00:26:35.768 00:26:35.768 ' 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:35.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.768 --rc genhtml_branch_coverage=1 00:26:35.768 --rc genhtml_function_coverage=1 00:26:35.768 --rc genhtml_legend=1 00:26:35.768 --rc geninfo_all_blocks=1 00:26:35.768 --rc geninfo_unexecuted_blocks=1 00:26:35.768 00:26:35.768 ' 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:35.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.768 --rc genhtml_branch_coverage=1 00:26:35.768 --rc genhtml_function_coverage=1 00:26:35.768 --rc genhtml_legend=1 00:26:35.768 --rc geninfo_all_blocks=1 00:26:35.768 --rc geninfo_unexecuted_blocks=1 00:26:35.768 00:26:35.768 ' 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:35.768 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:35.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:35.769 18:03:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:41.054 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:41.054 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:26:41.054 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:41.054 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:41.054 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:41.054 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:41.054 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:41.055 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:41.055 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:41.055 Found net devices under 0000:31:00.0: cvl_0_0 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:41.055 Found net devices under 0000:31:00.1: cvl_0_1 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
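Before tc1 runs, the trace above has mapped the two e810 PCI functions to their net devices through sysfs, and the commands traced just below split that pair across network namespaces so a single machine can act as both initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2, inside cvl_0_0_ns_spdk). Condensed into a sketch, same commands as the trace minus the harness wrappers:

    # Map a PCI function to its kernel net device name (the 'Found net
    # devices under ...' lines above come from exactly this glob).
    pci_net_devs=("/sys/bus/pci/devices/0000:31:00.0/net/"*)
    echo "target side: ${pci_net_devs[0]##*/}"                 # -> cvl_0_0

    # Two-host emulation on one box: move the target port into its own netns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                    # initiator -> target reachability check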
00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:41.055 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:41.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:41.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:26:41.056 00:26:41.056 --- 10.0.0.2 ping statistics --- 00:26:41.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.056 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:41.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:41.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:26:41.056 00:26:41.056 --- 10.0.0.1 ping statistics --- 00:26:41.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.056 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:41.056 ************************************ 00:26:41.056 START TEST nvmf_target_disconnect_tc1 00:26:41.056 ************************************ 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:41.056 18:03:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:41.056 [2024-12-06 18:03:28.771912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.056 [2024-12-06 18:03:28.771985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x605d00 with addr=10.0.0.2, port=4420 00:26:41.056 [2024-12-06 18:03:28.772017] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:41.056 [2024-12-06 18:03:28.772029] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:41.056 [2024-12-06 18:03:28.772038] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:26:41.056 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:41.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:41.056 Initializing NVMe Controllers 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:41.056 00:26:41.056 real 0m0.112s 00:26:41.056 user 0m0.056s 00:26:41.056 sys 0m0.055s 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:41.056 ************************************ 00:26:41.056 END TEST nvmf_target_disconnect_tc1 00:26:41.056 ************************************ 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
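tc1, which just finished above, is a negative test: the reconnect example is pointed at 10.0.0.2:4420 before any target listens there, so spdk_nvme_probe() must fail with errno 111 (ECONNREFUSED), which is exactly what the trace records. The NOT wrapper seen in the trace inverts the exit status; a simplified sketch (the real helper in autotest_common.sh also normalizes signal deaths with es > 128 and resolves the binary via valid_exec_arg):

    # Succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1      # unexpected success -> the test case fails
        fi
        return 0          # expected failure -> the test case passes
    }

    NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'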
00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:41.056 ************************************ 00:26:41.056 START TEST nvmf_target_disconnect_tc2 00:26:41.056 ************************************ 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:26:41.056 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:41.057 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:41.057 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:41.057 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:41.057 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:41.057 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3215698 00:26:41.057 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3215698 00:26:41.057 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3215698 ']' 00:26:41.057 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.057 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:41.057 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.057 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:41.057 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:41.057 18:03:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:41.057 [2024-12-06 18:03:28.874736] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:26:41.057 [2024-12-06 18:03:28.874783] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.316 [2024-12-06 18:03:28.960601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:41.316 [2024-12-06 18:03:29.000442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.316 [2024-12-06 18:03:29.000476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
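Startup ordering matters here: nvmfappstart launches nvmf_tgt inside the target namespace, and waitforlisten blocks until the app's RPC socket (/var/tmp/spdk.sock) is up, so the rpc_cmd calls that follow cannot race the initialization traced below. A rough equivalent of that gate (simplified; the real waitforlisten polls through the RPC client rather than just testing for the socket file):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do          # socket appears once the app is up
        kill -0 "$nvmfpid" || { echo "target died during startup"; exit 1; }
        sleep 0.1
    done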
00:26:41.316 [2024-12-06 18:03:29.000484] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.316 [2024-12-06 18:03:29.000491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.316 [2024-12-06 18:03:29.000497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:41.316 [2024-12-06 18:03:29.002408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:41.316 [2024-12-06 18:03:29.002564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:41.316 [2024-12-06 18:03:29.002715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:41.316 [2024-12-06 18:03:29.002716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:41.936 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:41.936 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:41.936 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:41.936 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:41.936 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:41.936 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.936 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:41.936 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.936 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:41.936 Malloc0 00:26:41.936 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.936 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:41.936 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.937 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:41.937 [2024-12-06 18:03:29.715665] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.937 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.937 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:41.937 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.937 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:41.937 18:03:29 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.937 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:41.937 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.937 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.276 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.276 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:42.276 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.276 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.276 [2024-12-06 18:03:29.743902] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:42.276 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.276 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:42.276 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.276 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.276 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.277 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3216046 00:26:42.277 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:42.277 18:03:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:44.226 18:03:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3215698 00:26:44.227 18:03:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:44.227 Read completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Read completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Read completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Read completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Read completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Read completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Read completed with error 
(sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Read completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Write completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Read completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Read completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Write completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Write completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Write completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Write completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Write completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Read completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Write completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Read completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Write completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Read completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Read completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Write completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Write completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Write completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Write completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Write completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Read completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Write completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Read completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Write completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 Write completed with error (sct=0, sc=8) 00:26:44.227 starting I/O failed 00:26:44.227 [2024-12-06 18:03:31.770233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.227 [2024-12-06 18:03:31.770525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.770549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.770832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.770843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.771033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.771043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 
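The burst above is the intended failure injection of tc2, in order: kill -9 removes the target mid-workload; SPDK completes the 32 outstanding I/Os with sct=0/sc=8 (generic status type, status code 0x08, i.e. command aborted due to SQ deletion, if I read the NVMe status fields right); the admin qpair then reports CQ transport error -6 (ENXIO, as the log itself decodes); and every reconnect attempt afterwards gets errno 111 (ECONNREFUSED), since nothing listens on 10.0.0.2:4420 anymore. That refused state is easy to confirm by hand (an illustrative probe, not part of the test):

    # bash's /dev/tcp pseudo-device attempts a TCP connect on open.
    timeout 1 bash -c 'echo > /dev/tcp/10.0.0.2/4420' \
        || echo "10.0.0.2:4420 refused the connection, as expected"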
00:26:44.227 [2024-12-06 18:03:31.771388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.771427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.771635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.771651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.771956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.771968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.772329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.772341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.772642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.772653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.772943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.772955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.773298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.773309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.773627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.773638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.773957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.773968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.774295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.774307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 
00:26:44.227 [2024-12-06 18:03:31.774562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.774573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.774901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.774912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.775110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.775123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.775460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.775471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.775524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.775535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.775859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.775872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.776193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.776205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.776512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.776524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.776857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.776868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.777132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.777144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 
00:26:44.227 [2024-12-06 18:03:31.777408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.777419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.777593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.777604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.227 [2024-12-06 18:03:31.777908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.227 [2024-12-06 18:03:31.777919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.227 qpair failed and we were unable to recover it. 00:26:44.228 [2024-12-06 18:03:31.778084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.228 [2024-12-06 18:03:31.778097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.228 qpair failed and we were unable to recover it. 00:26:44.228 [2024-12-06 18:03:31.778412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.228 [2024-12-06 18:03:31.778424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.228 qpair failed and we were unable to recover it. 00:26:44.228 [2024-12-06 18:03:31.778704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.228 [2024-12-06 18:03:31.778716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.228 qpair failed and we were unable to recover it. 00:26:44.228 [2024-12-06 18:03:31.779033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.228 [2024-12-06 18:03:31.779044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.228 qpair failed and we were unable to recover it. 00:26:44.228 [2024-12-06 18:03:31.779445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.228 [2024-12-06 18:03:31.779456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.228 qpair failed and we were unable to recover it. 00:26:44.228 [2024-12-06 18:03:31.779729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.228 [2024-12-06 18:03:31.779741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.228 qpair failed and we were unable to recover it. 00:26:44.228 [2024-12-06 18:03:31.780007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.228 [2024-12-06 18:03:31.780019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.228 qpair failed and we were unable to recover it. 
00:26:44.233 [2024-12-06 18:03:31.838227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.838238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.838630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.838641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.838938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.838949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.839219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.839230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.839454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.839465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.839724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.839735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.840005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.840016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.840220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.840231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.840525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.840536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.840813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.840824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 
00:26:44.233 [2024-12-06 18:03:31.841116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.841128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.841434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.841445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.841748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.841763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.842056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.842067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.842348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.842359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.842638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.842649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.842946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.842957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.843151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.843162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.843446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.843457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.843791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.843803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 
00:26:44.233 [2024-12-06 18:03:31.844098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.844113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.844446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.844457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.844744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.844755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.844938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.844950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.845244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.233 [2024-12-06 18:03:31.845256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.233 qpair failed and we were unable to recover it. 00:26:44.233 [2024-12-06 18:03:31.845562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.845573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.845849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.845860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.846153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.846165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.846452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.846463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.846661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.846673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 
00:26:44.234 [2024-12-06 18:03:31.846973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.846984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.847266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.847277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.847595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.847606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.847884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.847895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.848178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.848190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.848462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.848473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.848751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.848763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.849060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.849071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.849359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.849370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.849677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.849689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 
00:26:44.234 [2024-12-06 18:03:31.850018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.850030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.850323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.850334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.850654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.850666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.850955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.850966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.851242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.851253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.851542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.851553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.851839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.851850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.852121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.852132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.852444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.852454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.852660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.852671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 
00:26:44.234 [2024-12-06 18:03:31.853023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.853034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.853244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.853256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.853448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.853460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.853745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.853756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.853942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.853953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.854252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.854265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.854534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.854544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.854818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.854829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.855110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.855121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.855433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.855444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 
00:26:44.234 [2024-12-06 18:03:31.855712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.855722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.856012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.856023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.856324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.856335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.234 qpair failed and we were unable to recover it. 00:26:44.234 [2024-12-06 18:03:31.856605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.234 [2024-12-06 18:03:31.856616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.856789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.856800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.857178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.857189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.857493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.857504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.857780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.857791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.858060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.858071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.858363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.858375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 
00:26:44.235 [2024-12-06 18:03:31.858662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.858673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.858950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.858961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.859241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.859252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.859607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.859619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.859890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.859901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.860066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.860078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.860358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.860370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.860639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.860650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.861017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.861028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.861324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.861335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 
00:26:44.235 [2024-12-06 18:03:31.861625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.861638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.861921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.861932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.862215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.862226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.862525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.862536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.862831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.862842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.863170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.863181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.863453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.863464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.863774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.863785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.864106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.864118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.864422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.864433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 
00:26:44.235 [2024-12-06 18:03:31.864718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.864730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.865067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.865078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.865362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.865373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.865712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.865722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.866054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.866065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.866358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.866369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.866642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.866653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.866930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.866942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.867226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.867237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.867532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.867544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 
00:26:44.235 [2024-12-06 18:03:31.867824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.867835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.235 [2024-12-06 18:03:31.868111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.235 [2024-12-06 18:03:31.868122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.235 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.868424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.868435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.868759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.868769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.869043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.869054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.869125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.869136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.869419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.869430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.869713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.869726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.870007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.870018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.870323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.870335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 
00:26:44.236 [2024-12-06 18:03:31.870654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.870665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.870959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.870970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.871257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.871268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.871584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.871595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.871878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.871889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.872193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.872205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.872513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.872524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.872793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.872804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.873108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.873119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.873385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.873397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 
00:26:44.236 [2024-12-06 18:03:31.873676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.873687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.874016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.874027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.874335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.874346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.874520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.874532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.874829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.874840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.875012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.875023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.875320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.875331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.875663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.875674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.876061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.876072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.876373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.876385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 
00:26:44.236 [2024-12-06 18:03:31.876690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.876701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.876972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.876983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.877287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.877298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.877567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.877578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.877874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.236 [2024-12-06 18:03:31.877885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.236 qpair failed and we were unable to recover it. 00:26:44.236 [2024-12-06 18:03:31.878189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.237 [2024-12-06 18:03:31.878200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.237 qpair failed and we were unable to recover it. 00:26:44.237 [2024-12-06 18:03:31.878368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.237 [2024-12-06 18:03:31.878381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.237 qpair failed and we were unable to recover it. 00:26:44.237 [2024-12-06 18:03:31.878675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.237 [2024-12-06 18:03:31.878685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.237 qpair failed and we were unable to recover it. 00:26:44.237 [2024-12-06 18:03:31.879064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.237 [2024-12-06 18:03:31.879074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.237 qpair failed and we were unable to recover it. 00:26:44.237 [2024-12-06 18:03:31.879369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.237 [2024-12-06 18:03:31.879380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.237 qpair failed and we were unable to recover it. 
00:26:44.237 [2024-12-06 18:03:31.879673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.237 [2024-12-06 18:03:31.879684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:44.237 qpair failed and we were unable to recover it.
00:26:44.237 [... the same three-line failure repeats back-to-back from 18:03:31.879673 through 18:03:31.941727 (Jenkins timestamps 00:26:44.237-00:26:44.242): every reconnect attempt to tqpair=0x132d490 (addr=10.0.0.2, port=4420) fails in connect() with errno = 111 and the qpair cannot be recovered ...]
00:26:44.242 [2024-12-06 18:03:31.942029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.242 [2024-12-06 18:03:31.942040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.242 qpair failed and we were unable to recover it. 00:26:44.242 [2024-12-06 18:03:31.942335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.242 [2024-12-06 18:03:31.942346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.242 qpair failed and we were unable to recover it. 00:26:44.242 [2024-12-06 18:03:31.942623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.242 [2024-12-06 18:03:31.942634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.242 qpair failed and we were unable to recover it. 00:26:44.242 [2024-12-06 18:03:31.942917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.242 [2024-12-06 18:03:31.942928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.242 qpair failed and we were unable to recover it. 00:26:44.242 [2024-12-06 18:03:31.943133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.242 [2024-12-06 18:03:31.943145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.242 qpair failed and we were unable to recover it. 00:26:44.242 [2024-12-06 18:03:31.943430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.242 [2024-12-06 18:03:31.943441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.242 qpair failed and we were unable to recover it. 00:26:44.242 [2024-12-06 18:03:31.943729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.242 [2024-12-06 18:03:31.943740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.242 qpair failed and we were unable to recover it. 00:26:44.242 [2024-12-06 18:03:31.943897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.242 [2024-12-06 18:03:31.943909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.242 qpair failed and we were unable to recover it. 00:26:44.242 [2024-12-06 18:03:31.944225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.242 [2024-12-06 18:03:31.944236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.242 qpair failed and we were unable to recover it. 00:26:44.242 [2024-12-06 18:03:31.944544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.242 [2024-12-06 18:03:31.944555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.242 qpair failed and we were unable to recover it. 
00:26:44.242 [2024-12-06 18:03:31.944846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.242 [2024-12-06 18:03:31.944857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.242 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.945119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.945130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.945452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.945463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.945783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.945794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.946062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.946073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.946353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.946365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.946672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.946683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.946962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.946973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.947164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.947175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.947503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.947514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 
00:26:44.243 [2024-12-06 18:03:31.947792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.947803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.948089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.948103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.948402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.948413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.948703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.948714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.949002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.949013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.949297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.949309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.949616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.949627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.949898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.949909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.950189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.950200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.950492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.950503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 
00:26:44.243 [2024-12-06 18:03:31.950783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.950794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.951063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.951074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.951378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.951389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.951672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.951683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.951984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.951995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.952313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.952324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.952601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.952612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.952892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.952903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.953171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.953183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.953499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.953510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 
00:26:44.243 [2024-12-06 18:03:31.953783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.953795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.954124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.954135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.954432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.954446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.954718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.954729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.955008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.955019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.955322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.955333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.955664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.955675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.956017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.956028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.956257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.243 [2024-12-06 18:03:31.956269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.243 qpair failed and we were unable to recover it. 00:26:44.243 [2024-12-06 18:03:31.956587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.956598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 
00:26:44.244 [2024-12-06 18:03:31.956891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.956902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.957090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.957105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.957303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.957314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.957586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.957597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.957896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.957907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.958183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.958195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.958493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.958505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.958811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.958822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.959089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.959103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.959281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.959293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 
00:26:44.244 [2024-12-06 18:03:31.959595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.959606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.959881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.959893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.960162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.960173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.960469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.960481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.960751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.960762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.961060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.961071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.961357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.961369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.961671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.961682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.961971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.961982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.962162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.962175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 
00:26:44.244 [2024-12-06 18:03:31.962486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.962497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.962775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.962786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.963112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.963123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.963300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.963312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.963623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.963634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.963916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.963927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.964252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.964264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.964568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.964579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.964876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.964887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.965159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.965170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 
00:26:44.244 [2024-12-06 18:03:31.965480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.965492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.965777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.965788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.966072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.966083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.966387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.966399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.966685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.966696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.967035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.967046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.967241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.967253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.244 qpair failed and we were unable to recover it. 00:26:44.244 [2024-12-06 18:03:31.967453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.244 [2024-12-06 18:03:31.967465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.967765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.967776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.968071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.968082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 
00:26:44.245 [2024-12-06 18:03:31.968370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.968382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.968656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.968666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.968950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.968961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.969110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.969121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.969316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.969327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.969594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.969605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.969794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.969807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.970118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.970129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.970457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.970469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.970771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.970782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 
00:26:44.245 [2024-12-06 18:03:31.971062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.971073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.971367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.971379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.971690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.971701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.972038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.972050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.972253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.972265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.972466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.972477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.972781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.972793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.973092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.973106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.973381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.973393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.973681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.973692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 
00:26:44.245 [2024-12-06 18:03:31.973977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.973989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.974306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.974317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.974599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.974610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.974905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.974916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.975199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.975211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.975506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.975518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.975818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.975829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.976112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.976124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.976445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.976456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.976735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.976746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 
00:26:44.245 [2024-12-06 18:03:31.977032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.245 [2024-12-06 18:03:31.977043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.245 qpair failed and we were unable to recover it. 00:26:44.245 [2024-12-06 18:03:31.977307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.246 [2024-12-06 18:03:31.977318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.246 qpair failed and we were unable to recover it. 00:26:44.246 [2024-12-06 18:03:31.977585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.246 [2024-12-06 18:03:31.977596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.246 qpair failed and we were unable to recover it. 00:26:44.246 [2024-12-06 18:03:31.977921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.246 [2024-12-06 18:03:31.977932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.246 qpair failed and we were unable to recover it. 00:26:44.246 [2024-12-06 18:03:31.978239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.246 [2024-12-06 18:03:31.978251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.246 qpair failed and we were unable to recover it. 00:26:44.246 [2024-12-06 18:03:31.978517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.246 [2024-12-06 18:03:31.978528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.246 qpair failed and we were unable to recover it. 00:26:44.246 [2024-12-06 18:03:31.978809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.246 [2024-12-06 18:03:31.978820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.246 qpair failed and we were unable to recover it. 00:26:44.246 [2024-12-06 18:03:31.979092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.246 [2024-12-06 18:03:31.979107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.246 qpair failed and we were unable to recover it. 00:26:44.246 [2024-12-06 18:03:31.979441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.246 [2024-12-06 18:03:31.979452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.246 qpair failed and we were unable to recover it. 00:26:44.246 [2024-12-06 18:03:31.979732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.246 [2024-12-06 18:03:31.979743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.246 qpair failed and we were unable to recover it. 
00:26:44.246 [2024-12-06 18:03:31.980017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.246 [2024-12-06 18:03:31.980028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:44.246 qpair failed and we were unable to recover it.
00:26:44.246 [2024-12-06 18:03:31.980333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.246 [2024-12-06 18:03:31.980344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:44.246 qpair failed and we were unable to recover it.
00:26:44.532 [... the same three-line failure repeats continuously from 18:03:31.980 through 18:03:32.043: every connect() attempt returns errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x132d490 at 10.0.0.2:4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:26:44.532 [2024-12-06 18:03:32.043535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.043547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.043808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.043821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.044139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.044150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.044439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.044450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.044724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.044735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.045034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.045045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.045372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.045383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.045682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.045694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.045992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.046003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.046280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.046292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 
00:26:44.532 [2024-12-06 18:03:32.046608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.046619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.046907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.046919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.047195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.047207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.047511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.047522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.047857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.047869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.048150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.048162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.048459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.048471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.048743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.048755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.049044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.049056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.049350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.049362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 
00:26:44.532 [2024-12-06 18:03:32.049634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.049646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.049976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.049986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.050264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.050276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.050554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.532 [2024-12-06 18:03:32.050565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.532 qpair failed and we were unable to recover it. 00:26:44.532 [2024-12-06 18:03:32.050880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.050892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.051203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.051215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.051493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.051505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.051790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.051802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.052080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.052091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.052392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.052404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 
00:26:44.533 [2024-12-06 18:03:32.052691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.052702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.053029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.053042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.053338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.053350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.053675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.053687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.053973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.053984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.054254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.054265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.054585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.054596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.054890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.054901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.055211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.055222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.055521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.055532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 
00:26:44.533 [2024-12-06 18:03:32.055833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.055845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.056198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.056212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.056491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.056501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.056800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.056813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.057112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.057124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.057445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.057456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.057720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.057732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.058030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.058042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.058347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.058359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.058654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.058666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 
00:26:44.533 [2024-12-06 18:03:32.058926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.058938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.059245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.059257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.059445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.059457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.059744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.059757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.060052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.060064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.060408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.060420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.060751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.060762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.061047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.061058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.061242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.061254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.061610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.061622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 
00:26:44.533 [2024-12-06 18:03:32.061944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.061956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.062257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.533 [2024-12-06 18:03:32.062269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.533 qpair failed and we were unable to recover it. 00:26:44.533 [2024-12-06 18:03:32.062594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.062606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.062903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.062915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.063271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.063283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.063592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.063604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.063910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.063921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.064218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.064230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.064544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.064559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.064886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.064897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 
00:26:44.534 [2024-12-06 18:03:32.065188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.065199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.065385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.065397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.065702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.065714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.065991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.066003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.066262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.066273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.066584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.066595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.066890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.066901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.067162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.067175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.067507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.067518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.067848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.067858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 
00:26:44.534 [2024-12-06 18:03:32.068157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.068170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.068489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.068500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.068793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.068804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.069113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.069126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.069268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.069280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.069575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.069586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.069931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.069941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.070250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.070261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.070490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.070501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.070800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.070811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 
00:26:44.534 [2024-12-06 18:03:32.071118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.071130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.071451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.071462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.071762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.071773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.072073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.072085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.072297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.072309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.072602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.072613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.072919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.072930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.073232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.073244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.073595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.073606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.534 [2024-12-06 18:03:32.073909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.073921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 
00:26:44.534 [2024-12-06 18:03:32.074111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.534 [2024-12-06 18:03:32.074123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.534 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.074404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.074415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.074684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.074695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.074979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.074990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.075272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.075283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.075575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.075586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.075855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.075866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.076156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.076168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.076482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.076493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.076757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.076769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 
00:26:44.535 [2024-12-06 18:03:32.077033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.077044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.077377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.077389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.077686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.077697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.077959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.077970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.078265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.078276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.078561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.078572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.078833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.078844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.079172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.079183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.079463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.079474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.079739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.079750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 
00:26:44.535 [2024-12-06 18:03:32.080027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.080039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.080336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.080348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.080620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.080630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.080903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.080914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.081177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.081189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.081495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.081507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.081691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.081703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.082026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.082037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.082350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.082361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 00:26:44.535 [2024-12-06 18:03:32.082623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.535 [2024-12-06 18:03:32.082635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.535 qpair failed and we were unable to recover it. 
00:26:44.535 [2024-12-06 18:03:32.082908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.535 [2024-12-06 18:03:32.082919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:44.535 qpair failed and we were unable to recover it.
00:26:44.541 [... the same three-line sequence -- posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." -- repeats continuously at sub-millisecond intervals through [2024-12-06 18:03:32.146646]; duplicate entries truncated ...]
00:26:44.541 [2024-12-06 18:03:32.146959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-12-06 18:03:32.146970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-12-06 18:03:32.147164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-12-06 18:03:32.147176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-12-06 18:03:32.147449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-12-06 18:03:32.147460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-12-06 18:03:32.147739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-12-06 18:03:32.147750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-12-06 18:03:32.148057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-12-06 18:03:32.148068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-12-06 18:03:32.148371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-12-06 18:03:32.148383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-12-06 18:03:32.148654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-12-06 18:03:32.148665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-12-06 18:03:32.148954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-12-06 18:03:32.148965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-12-06 18:03:32.149240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-12-06 18:03:32.149253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-12-06 18:03:32.149556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-12-06 18:03:32.149567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 
00:26:44.541 [2024-12-06 18:03:32.149846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-12-06 18:03:32.149858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-12-06 18:03:32.150147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-12-06 18:03:32.150158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-12-06 18:03:32.150350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-12-06 18:03:32.150361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-12-06 18:03:32.150672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-12-06 18:03:32.150687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-12-06 18:03:32.150988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-12-06 18:03:32.151000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-12-06 18:03:32.151305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-12-06 18:03:32.151317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.541 [2024-12-06 18:03:32.151646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.541 [2024-12-06 18:03:32.151656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.541 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.151839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.151850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.152120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.152132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.152437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.152448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 
00:26:44.542 [2024-12-06 18:03:32.152733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.152744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.153021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.153032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.153378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.153389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.153576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.153587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.153910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.153921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.154195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.154207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.154510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.154521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.154804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.154815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.155106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.155117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.155435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.155446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 
00:26:44.542 [2024-12-06 18:03:32.155730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.155741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.156080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.156091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.156387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.156398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.156691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.156702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.156996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.157007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.157305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.157317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.157650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.157661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.157941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.157952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.158236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.158248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.158572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.158583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 
00:26:44.542 [2024-12-06 18:03:32.158844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.158857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.159028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.159040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.159341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.159352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.159615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.159626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.159903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.159914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.160191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.160203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.160534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.160546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.160849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.160860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.161142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.161153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.161517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.161528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 
00:26:44.542 [2024-12-06 18:03:32.161812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.161823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.162171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.162183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.162456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.162467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.162748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.162759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.163099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.542 [2024-12-06 18:03:32.163113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.542 qpair failed and we were unable to recover it. 00:26:44.542 [2024-12-06 18:03:32.163444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.163455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.163731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.163742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.163914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.163925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.164243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.164255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.164571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.164582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 
00:26:44.543 [2024-12-06 18:03:32.164888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.164899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.165239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.165251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.165537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.165548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.165832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.165844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.166139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.166151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.166459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.166470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.166777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.166789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.167065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.167078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.167377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.167388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.167665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.167677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 
00:26:44.543 [2024-12-06 18:03:32.167865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.167875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.168082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.168093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.168395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.168406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.168711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.168722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.169011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.169022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.169308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.169319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.169626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.169637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.169926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.169938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.170228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.170239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.170549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.170560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 
00:26:44.543 [2024-12-06 18:03:32.170834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.170845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.171121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.171133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.171457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.171468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.171741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.171752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.172035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.172046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.172339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.172350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.172626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.172637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.172910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.172921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.173202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.173214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.173523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.173534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 
00:26:44.543 [2024-12-06 18:03:32.173818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.173829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.174139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.174151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.174475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.174486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.543 [2024-12-06 18:03:32.174764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.543 [2024-12-06 18:03:32.174775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.543 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.175063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.175074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.175428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.175440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.175759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.175770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.176077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.176089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.176385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.176397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.176689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.176700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 
00:26:44.544 [2024-12-06 18:03:32.177001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.177011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.177304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.177315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.177603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.177614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.177905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.177916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.178220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.178232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.178542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.178553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.178847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.178858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.179158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.179169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.179468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.179481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.179770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.179781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 
00:26:44.544 [2024-12-06 18:03:32.180080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.180090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.180421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.180433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.180711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.180722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.181020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.181030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.181331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.181342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.181625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.181636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.181907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.181918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.182192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.182204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.182499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.182510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.182784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.182794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 
00:26:44.544 [2024-12-06 18:03:32.183090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.183103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.183422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.183433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.183729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.183740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.184043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.184055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.184251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.184262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.184568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.184580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.184884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.184896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.185171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.185183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.185345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.185357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 00:26:44.544 [2024-12-06 18:03:32.185681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.544 [2024-12-06 18:03:32.185692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.544 qpair failed and we were unable to recover it. 
00:26:44.544 [2024-12-06 18:03:32.185876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.544 [2024-12-06 18:03:32.185887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:44.544 qpair failed and we were unable to recover it.
00:26:44.550 [2024-12-06 18:03:32.185876 .. 18:03:32.248582] last three messages repeated for every reconnect attempt in this window: each connect() to 10.0.0.2 port 4420 on tqpair=0x132d490 failed with errno = 111, and the qpair could not be recovered.
00:26:44.550 [2024-12-06 18:03:32.248895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-12-06 18:03:32.248906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.550 qpair failed and we were unable to recover it. 00:26:44.550 [2024-12-06 18:03:32.249192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-12-06 18:03:32.249203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.550 qpair failed and we were unable to recover it. 00:26:44.550 [2024-12-06 18:03:32.249500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-12-06 18:03:32.249511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.550 qpair failed and we were unable to recover it. 00:26:44.550 [2024-12-06 18:03:32.249793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-12-06 18:03:32.249804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.550 qpair failed and we were unable to recover it. 00:26:44.550 [2024-12-06 18:03:32.250138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-12-06 18:03:32.250150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.550 qpair failed and we were unable to recover it. 00:26:44.550 [2024-12-06 18:03:32.250461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-12-06 18:03:32.250472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.550 qpair failed and we were unable to recover it. 00:26:44.550 [2024-12-06 18:03:32.250751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-12-06 18:03:32.250762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.550 qpair failed and we were unable to recover it. 00:26:44.550 [2024-12-06 18:03:32.251067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-12-06 18:03:32.251078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.550 qpair failed and we were unable to recover it. 00:26:44.550 [2024-12-06 18:03:32.251413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-12-06 18:03:32.251424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.550 qpair failed and we were unable to recover it. 00:26:44.550 [2024-12-06 18:03:32.251755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-12-06 18:03:32.251766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.550 qpair failed and we were unable to recover it. 
00:26:44.550 [2024-12-06 18:03:32.252043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-12-06 18:03:32.252054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.550 qpair failed and we were unable to recover it. 00:26:44.550 [2024-12-06 18:03:32.252354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.550 [2024-12-06 18:03:32.252365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.550 qpair failed and we were unable to recover it. 00:26:44.550 [2024-12-06 18:03:32.252556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.252568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.252879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.252890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.253164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.253176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.253491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.253502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.253776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.253787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.254061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.254072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.254395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.254407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.254739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.254750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 
00:26:44.551 [2024-12-06 18:03:32.254959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.254970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.255268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.255279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.255578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.255588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.255864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.255875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.256251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.256263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.256534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.256546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.256859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.256870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.257110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.257122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.257446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.257458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.257735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.257746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 
00:26:44.551 [2024-12-06 18:03:32.258025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.258036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.258240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.258251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.258546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.258558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.258839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.258850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.259126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.259137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.259452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.259463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.259769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.259781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.260090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.260104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.260382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.260394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.260733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.260746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 
00:26:44.551 [2024-12-06 18:03:32.260933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.260944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.261258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.261269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.261561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.261572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.261885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.261897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.262172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.262184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.262390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.262402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.262702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.262713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.263048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.263059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.263330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.263342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 00:26:44.551 [2024-12-06 18:03:32.263623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.263634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.551 qpair failed and we were unable to recover it. 
00:26:44.551 [2024-12-06 18:03:32.263908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.551 [2024-12-06 18:03:32.263919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.264233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.264245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.264549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.264560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.264871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.264882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.265175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.265187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.265495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.265506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.265778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.265789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.266078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.266089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.266431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.266442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.266744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.266755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 
00:26:44.552 [2024-12-06 18:03:32.267105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.267116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.267406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.267417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.267760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.267771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.268098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.268114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.268402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.268412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.268738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.268749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.269055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.269066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.269374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.269386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.269685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.269696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.269982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.269993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 
00:26:44.552 [2024-12-06 18:03:32.270279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.270290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.270614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.270625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.270909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.270921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.271257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.271269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.271541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.271553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.271854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.271865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.272174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.272185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.272504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.272515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.272809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.272821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.273112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.273123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 
00:26:44.552 [2024-12-06 18:03:32.273429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.273440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.273749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.273761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.274531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.274555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.274871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.274884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.275207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.275221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.275417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.275429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.275717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.275729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.276010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.276022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.552 [2024-12-06 18:03:32.276342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.552 [2024-12-06 18:03:32.276354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.552 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.276661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.276672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 
00:26:44.553 [2024-12-06 18:03:32.277250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.277270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.277581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.277594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.277898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.277911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.278131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.278145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.278424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.278435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.278736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.278749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.279029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.279040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.279366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.279378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.279669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.279682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.279979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.279990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 
00:26:44.553 [2024-12-06 18:03:32.280698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.280719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.281032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.281043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.281239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.281250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.281536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.281547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.281884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.281895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.282170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.282183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.282491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.282502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.282686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.282702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.283023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.283035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.283286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.283298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 
00:26:44.553 [2024-12-06 18:03:32.283649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.283661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.283970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.283982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.284200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.284213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.284497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.284508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.284831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.284842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.285161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.285173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.285481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.285493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.285769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.285781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.286076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.286087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.286402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.286415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 
00:26:44.553 [2024-12-06 18:03:32.286509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.286519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.286823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.286835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.287150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.287162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.287486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.287497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.287865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.287876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.288209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.288221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.288505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.553 [2024-12-06 18:03:32.288516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.553 qpair failed and we were unable to recover it. 00:26:44.553 [2024-12-06 18:03:32.288827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.554 [2024-12-06 18:03:32.288839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.554 qpair failed and we were unable to recover it. 00:26:44.554 [2024-12-06 18:03:32.289163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.554 [2024-12-06 18:03:32.289175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.554 qpair failed and we were unable to recover it. 00:26:44.554 [2024-12-06 18:03:32.289571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.554 [2024-12-06 18:03:32.289582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.554 qpair failed and we were unable to recover it. 
00:26:44.554 [2024-12-06 18:03:32.289872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:44.554 [2024-12-06 18:03:32.289884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 
00:26:44.554 qpair failed and we were unable to recover it. 
00:26:44.554-00:26:44.849 [2024-12-06 18:03:32.290 - 18:03:32.349] the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair (connect() failed, errno = 111; tqpair=0x132d490, addr=10.0.0.2, port=4420) repeats roughly 200 more times, each attempt ending "qpair failed and we were unable to recover it." 
00:26:44.849 [2024-12-06 18:03:32.350211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.350222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.350506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.350517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.350814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.350825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.351027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.351038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.351253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.351265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.351554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.351565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.351870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.351881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.352084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.352094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.352389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.352399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.352697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.352706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 
00:26:44.849 [2024-12-06 18:03:32.352892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.352902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.353071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.353083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.353293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.353303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.353628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.353640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.353920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.353931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.354231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.354243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.354429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.354442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.354655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.354667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.354825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.354837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.355127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.355140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 
00:26:44.849 [2024-12-06 18:03:32.355461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.355473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.355853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.355865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.849 [2024-12-06 18:03:32.356154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.849 [2024-12-06 18:03:32.356166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.849 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.356484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.356496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.356811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.356823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.357110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.357121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.357328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.357340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.357521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.357533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.357709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.357720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.357806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.357818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 
00:26:44.850 [2024-12-06 18:03:32.358028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.358039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.358124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.358136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.358468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.358479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.358786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.358797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.359105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.359117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.359422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.359433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.359605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.359617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.359928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.359939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.359987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.359999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.360244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.360256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 
00:26:44.850 [2024-12-06 18:03:32.360589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.360601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.360903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.360915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.361222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.361234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.361529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.361541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.361821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.361833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.362135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.362147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.362339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.362350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.362654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.362665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.362957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.362969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.363303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.363315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 
00:26:44.850 [2024-12-06 18:03:32.363593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.363604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.363909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.363920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.364283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.364295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.364443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.364455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.364770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.364781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.365097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.365113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.365385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.365396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.365589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.365600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.365888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.365899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.366230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.366242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 
00:26:44.850 [2024-12-06 18:03:32.366540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.366551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.366720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.366732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.366901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.366912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.367231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.367243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.367595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.367606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.367919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.367930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.368350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.368362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.368687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.368698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.368869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.368881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.369181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.369193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 
00:26:44.850 [2024-12-06 18:03:32.369523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.369535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.369888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.369900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.370198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.370210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.370321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.370333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.370636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.370647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.370947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.370958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.371146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.371157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.371512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.371523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.371831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.371842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.372128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.372144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 
00:26:44.850 [2024-12-06 18:03:32.372335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.372346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.372687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.372699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.373088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.373099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.373448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.373459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.373649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.373660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.373986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.373997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.374337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.374349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.374635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.374646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.374945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.374956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.375281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.375292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 
00:26:44.850 [2024-12-06 18:03:32.375578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.375589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.375927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.375938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.376186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.376198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.376417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.376428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.376550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.850 [2024-12-06 18:03:32.376562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.850 qpair failed and we were unable to recover it. 00:26:44.850 [2024-12-06 18:03:32.376742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.376754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.377035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.377047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.377416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.377428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.377689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.377700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.378017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.378028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 
00:26:44.851 [2024-12-06 18:03:32.378334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.378346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.378587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.378598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.378902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.378913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.379087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.379098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.379304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.379316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.379598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.379609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.379766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.379779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.379887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.379896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.380259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.380270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.380577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.380588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 
00:26:44.851 [2024-12-06 18:03:32.380843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.380854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.381127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.381138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.381337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.381348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.381709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.381720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.382053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.382064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.382270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.382281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.382571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.382582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.382905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.382917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.383253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.383265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.383560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.383571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 
00:26:44.851 [2024-12-06 18:03:32.383849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.383860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.384051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.384062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.384355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.384367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.384545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.384557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.384730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.384741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.385037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.385049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.385396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.385407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.385705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.385716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.385973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.385984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 00:26:44.851 [2024-12-06 18:03:32.386233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.386245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it. 
00:26:44.851 [2024-12-06 18:03:32.386658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.851 [2024-12-06 18:03:32.386669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.851 qpair failed and we were unable to recover it.
[... the same error sequence — posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED), followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats 208 more times with only the timestamps advancing, from 18:03:32.386975 through 18:03:32.447919 ...]
00:26:44.855 [2024-12-06 18:03:32.448145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.448156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it.
00:26:44.855 [2024-12-06 18:03:32.448477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.448488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.448784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.448795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.449122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.449133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.449475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.449485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.449763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.449773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.450112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.450122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.450389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.450399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.450598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.450608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.450902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.450912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.451121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.451132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 
00:26:44.855 [2024-12-06 18:03:32.451332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.451345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.451686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.451696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.452014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.452024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.452351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.452362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.452652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.452661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.452991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.453001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.453297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.453307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.453604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.453615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.453910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.453920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.454261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.454271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 
00:26:44.855 [2024-12-06 18:03:32.454620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.454630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.454960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.454970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.455267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.455277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.455570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.455580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.455868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.455878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.456167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.456177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.456480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.456490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.456792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.456802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.457113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.457124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.457431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.457442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 
00:26:44.855 [2024-12-06 18:03:32.457732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.457741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.458029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.458039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.458341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.458351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.458648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.458658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.458961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.458971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.459291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.459301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.459635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.459645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.459847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.459857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.460165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.460176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.460371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.460380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 
00:26:44.855 [2024-12-06 18:03:32.460699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.460709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.461020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.461030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.461338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.461348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.461654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.461663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.461998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.462008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.462354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.462365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.462557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.462567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.462884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.462893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.463184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.463194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.463605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.463616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 
00:26:44.855 [2024-12-06 18:03:32.463949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.463958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.464300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.464311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.464680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.855 [2024-12-06 18:03:32.464689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.855 qpair failed and we were unable to recover it. 00:26:44.855 [2024-12-06 18:03:32.465004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.465015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.465288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.465299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.465622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.465632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.465772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.465782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.465932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.465942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.466217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.466227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.466536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.466546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 
00:26:44.856 [2024-12-06 18:03:32.466896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.466907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.467228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.467238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.467565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.467576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.467861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.467871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.468281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.468291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.468579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.468589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.468893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.468902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.469219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.469230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.469529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.469538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.469822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.469831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 
00:26:44.856 [2024-12-06 18:03:32.470155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.470166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.470524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.470533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.470832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.470842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.471125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.471136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.471437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.471447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.471839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.471848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.472140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.472150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.472471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.472481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.472761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.472773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.473056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.473067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 
00:26:44.856 [2024-12-06 18:03:32.473406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.473416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.473701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.473712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.473998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.474008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.474294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.474304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.474629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.474638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.474911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.474921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.475216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.475226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.475508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.475518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.475810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.475820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.476144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.476154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 
00:26:44.856 [2024-12-06 18:03:32.476446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.476456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.476820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.476831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.477030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.477041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.477380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.477391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.477682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.477692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.477973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.477983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.478269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.478279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.478579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.478588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.478876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.478886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.479176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.479187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 
00:26:44.856 [2024-12-06 18:03:32.479494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.479504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.479766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.479776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.480106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.480116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.480432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.480442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.480724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.480734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.481069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.481080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.481386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.481398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.481689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.481699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.481983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.481993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.482281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.482292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 
00:26:44.856 [2024-12-06 18:03:32.482568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.482578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.482899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.482909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.483198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.483208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.483534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.856 [2024-12-06 18:03:32.483543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.856 qpair failed and we were unable to recover it. 00:26:44.856 [2024-12-06 18:03:32.483829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.857 [2024-12-06 18:03:32.483839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.857 qpair failed and we were unable to recover it. 00:26:44.857 [2024-12-06 18:03:32.484129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.857 [2024-12-06 18:03:32.484139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.857 qpair failed and we were unable to recover it. 00:26:44.857 [2024-12-06 18:03:32.484451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.857 [2024-12-06 18:03:32.484461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.857 qpair failed and we were unable to recover it. 00:26:44.857 [2024-12-06 18:03:32.484747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.857 [2024-12-06 18:03:32.484756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.857 qpair failed and we were unable to recover it. 00:26:44.857 [2024-12-06 18:03:32.485073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.857 [2024-12-06 18:03:32.485083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.857 qpair failed and we were unable to recover it. 00:26:44.857 [2024-12-06 18:03:32.485389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.857 [2024-12-06 18:03:32.485400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.857 qpair failed and we were unable to recover it. 
00:26:44.857 [2024-12-06 18:03:32.485691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.857 [2024-12-06 18:03:32.485701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.857 qpair failed and we were unable to recover it. 00:26:44.857 [2024-12-06 18:03:32.485993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.857 [2024-12-06 18:03:32.486002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.857 qpair failed and we were unable to recover it. 00:26:44.857 [2024-12-06 18:03:32.486170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.857 [2024-12-06 18:03:32.486181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.857 qpair failed and we were unable to recover it. 00:26:44.857 [2024-12-06 18:03:32.486505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.857 [2024-12-06 18:03:32.486515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.857 qpair failed and we were unable to recover it. 00:26:44.857 [2024-12-06 18:03:32.486738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.857 [2024-12-06 18:03:32.486748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.857 qpair failed and we were unable to recover it. 00:26:44.857 [2024-12-06 18:03:32.487106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.857 [2024-12-06 18:03:32.487118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.857 qpair failed and we were unable to recover it. 00:26:44.857 [2024-12-06 18:03:32.487434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.857 [2024-12-06 18:03:32.487443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.857 qpair failed and we were unable to recover it. 00:26:44.857 [2024-12-06 18:03:32.487733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.857 [2024-12-06 18:03:32.487743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.857 qpair failed and we were unable to recover it. 00:26:44.857 [2024-12-06 18:03:32.488035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.857 [2024-12-06 18:03:32.488044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.857 qpair failed and we were unable to recover it. 00:26:44.857 [2024-12-06 18:03:32.488372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.857 [2024-12-06 18:03:32.488383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.857 qpair failed and we were unable to recover it. 
00:26:44.857 [2024-12-06 18:03:32.488694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:44.857 [2024-12-06 18:03:32.488704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:44.857 qpair failed and we were unable to recover it.
[... the three messages above repeat verbatim, with only the timestamps changing, for roughly 210 consecutive connection attempts between 18:03:32.488694 and 18:03:32.553168 (console time 00:26:44.857-00:26:44.861); every attempt fails with errno = 111 on the same tqpair=0x132d490 at 10.0.0.2, port=4420 ...]
00:26:44.861 [2024-12-06 18:03:32.553536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.553547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.553867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.553877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.554164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.554174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.554496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.554507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.554862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.554872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.555174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.555185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.555471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.555481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.555791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.555800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.556105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.556115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.556392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.556403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 
00:26:44.861 [2024-12-06 18:03:32.556715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.556725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.556907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.556916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.557221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.557231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.557543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.557553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.557826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.557836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.558125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.558136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.558468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.558478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.558798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.558808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.559099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.559112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.559324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.559333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 
00:26:44.861 [2024-12-06 18:03:32.559516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.559526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.559850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.559859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.560166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.560177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.560395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.560405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.560710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.560719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.561003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.561014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.561324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.561335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.561626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.561636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.561916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.561925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.562237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.562247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 
00:26:44.861 [2024-12-06 18:03:32.562531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.562540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.562862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.562873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.563185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.563195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.563498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.563508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.563791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.563801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.564127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.564138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.564474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.564484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.564788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.564798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.565089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.565102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.565400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.565410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 
00:26:44.861 [2024-12-06 18:03:32.565691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.565701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.566033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.566043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.566345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.566355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.566663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.566673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.567003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.567013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.567332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.567342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.567629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.861 [2024-12-06 18:03:32.567638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.861 qpair failed and we were unable to recover it. 00:26:44.861 [2024-12-06 18:03:32.567939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.567949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.568254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.568264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.568557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.568566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 
00:26:44.862 [2024-12-06 18:03:32.568853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.568865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.569150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.569161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.569515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.569525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.569865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.569875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.570167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.570178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.570482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.570492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.570682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.570691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.570961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.570971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.571272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.571282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.571587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.571597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 
00:26:44.862 [2024-12-06 18:03:32.571878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.571888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.572215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.572226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.572536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.572545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.572831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.572841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.573125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.573136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.573456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.573466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.573757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.573767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.574053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.574064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.574377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.574388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.574668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.574678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 
00:26:44.862 [2024-12-06 18:03:32.574973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.574983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.575149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.575160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.575485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.575495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.575664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.575674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.575980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.575990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.576226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.576236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.576509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.576519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.576823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.576835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.577027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.577037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.577402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.577412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 
00:26:44.862 [2024-12-06 18:03:32.577753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.577763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.578152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.578162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.578454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.578464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.578746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.578756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.579043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.579053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.579279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.579289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.579592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.579601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.579894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.579904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.580071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.580082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.580393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.580403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 
00:26:44.862 [2024-12-06 18:03:32.580692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.580702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.581039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.581049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.581338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.581348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.581668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.581678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.582002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.582012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.582319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.582329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.582504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.582514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.582773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.582783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.583091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.583106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.583483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.583493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 
00:26:44.862 [2024-12-06 18:03:32.583829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.583839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.584135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.584145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.584475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.584485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.584787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.862 [2024-12-06 18:03:32.584796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.862 qpair failed and we were unable to recover it. 00:26:44.862 [2024-12-06 18:03:32.585075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.585086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.585382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.585393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.585554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.585565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.585906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.585916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.586088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.586098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.586446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.586457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 
00:26:44.863 [2024-12-06 18:03:32.586748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.586758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.587054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.587063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.587454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.587465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.587764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.587775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.587949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.587961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.588236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.588246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.588534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.588543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.588868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.588878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.589177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.589188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.589480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.589489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 
00:26:44.863 [2024-12-06 18:03:32.589671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.589681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.589967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.589976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.590172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.590182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.590489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.590499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.590787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.590797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.591156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.591166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.591468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.591478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.591851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.591861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.592173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.592183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 00:26:44.863 [2024-12-06 18:03:32.592476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.592486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it. 
00:26:44.863 [2024-12-06 18:03:32.592775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.863 [2024-12-06 18:03:32.592784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:44.863 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 -> nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 18:03:32.593 and 18:03:32.656; only the timestamps differ ...]
00:26:45.140 [2024-12-06 18:03:32.656505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.140 [2024-12-06 18:03:32.656514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.140 qpair failed and we were unable to recover it.
00:26:45.140 [2024-12-06 18:03:32.656818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.140 [2024-12-06 18:03:32.656828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.140 qpair failed and we were unable to recover it. 00:26:45.140 [2024-12-06 18:03:32.657118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.140 [2024-12-06 18:03:32.657128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.140 qpair failed and we were unable to recover it. 00:26:45.140 [2024-12-06 18:03:32.657465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.140 [2024-12-06 18:03:32.657475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.140 qpair failed and we were unable to recover it. 00:26:45.140 [2024-12-06 18:03:32.657779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.140 [2024-12-06 18:03:32.657788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.140 qpair failed and we were unable to recover it. 00:26:45.140 [2024-12-06 18:03:32.658082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.140 [2024-12-06 18:03:32.658091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.140 qpair failed and we were unable to recover it. 00:26:45.140 [2024-12-06 18:03:32.658433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.140 [2024-12-06 18:03:32.658444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.140 qpair failed and we were unable to recover it. 00:26:45.140 [2024-12-06 18:03:32.658750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.140 [2024-12-06 18:03:32.658760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.140 qpair failed and we were unable to recover it. 00:26:45.140 [2024-12-06 18:03:32.659063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.140 [2024-12-06 18:03:32.659072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.140 qpair failed and we were unable to recover it. 00:26:45.140 [2024-12-06 18:03:32.659460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.140 [2024-12-06 18:03:32.659470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.140 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.659785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.659795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 
00:26:45.141 [2024-12-06 18:03:32.660091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.660107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.660445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.660455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.660747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.660757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.661042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.661052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.661221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.661232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.661533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.661543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.661870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.661880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.662180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.662191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.662545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.662555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.662844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.662854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 
00:26:45.141 [2024-12-06 18:03:32.663119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.663129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.663413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.663422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.663780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.663791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.664086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.664096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.664288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.664298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.664625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.664635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.664916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.664926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.665223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.665234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.665433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.665444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.665776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.665786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 
00:26:45.141 [2024-12-06 18:03:32.666094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.666107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.666394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.666404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.666695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.666705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.667018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.667029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.667321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.667331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.667618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.667627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.667917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.667927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.668221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.668232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.668510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.668520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.668810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.668819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 
00:26:45.141 [2024-12-06 18:03:32.669099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.669112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.669412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.669422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.669640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.669650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.669864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.669875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.670181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.670191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.670514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.670524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.141 [2024-12-06 18:03:32.670814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.141 [2024-12-06 18:03:32.670824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.141 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.671108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.671118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.671469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.671479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.671759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.671772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 
00:26:45.142 [2024-12-06 18:03:32.672048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.672058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.672402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.672412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.672709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.672719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.673004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.673015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.673321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.673331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.673625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.673635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.673976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.673987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.674293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.674303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.674495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.674505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.674695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.674706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 
00:26:45.142 [2024-12-06 18:03:32.675020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.675030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.675311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.675321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.675621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.675631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.675924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.675934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.676193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.676204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.676393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.676404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.676721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.676730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.677082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.677092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.677401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.677411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.677771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.677781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 
00:26:45.142 [2024-12-06 18:03:32.678062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.678071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.678374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.678385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.678694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.678704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.679014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.679023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.679313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.679324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.679624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.679634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.679948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.679960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.680239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.680249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.680587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.680597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.680759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.680772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 
00:26:45.142 [2024-12-06 18:03:32.681069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.681079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.681365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.681375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.681580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.681589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.681815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.681825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.682156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.142 [2024-12-06 18:03:32.682167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.142 qpair failed and we were unable to recover it. 00:26:45.142 [2024-12-06 18:03:32.682467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.682477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.682642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.682652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.682938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.682948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.683296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.683306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.683589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.683598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 
00:26:45.143 [2024-12-06 18:03:32.683883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.683893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.684207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.684218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.684523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.684532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.684847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.684857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.685167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.685186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.685526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.685536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.685852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.685862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.686147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.686157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.686451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.686461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.686791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.686801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 
00:26:45.143 [2024-12-06 18:03:32.687090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.687103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.687383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.687392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.687711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.687722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.688051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.688061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.688384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.688395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.688606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.688616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.688945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.688955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.689265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.689276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.689560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.689569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.689892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.689902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 
00:26:45.143 [2024-12-06 18:03:32.690076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.690086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.690383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.690393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.690689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.690700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.690984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.690994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.691278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.691288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.691572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.691581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.691870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.691880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.692110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.692120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.692439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.692448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.692781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.692790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 
00:26:45.143 [2024-12-06 18:03:32.692959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.692970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.693239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.693249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.693578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.693587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.143 qpair failed and we were unable to recover it. 00:26:45.143 [2024-12-06 18:03:32.693912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.143 [2024-12-06 18:03:32.693923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.144 qpair failed and we were unable to recover it. 00:26:45.144 [2024-12-06 18:03:32.694261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.144 [2024-12-06 18:03:32.694271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.144 qpair failed and we were unable to recover it. 00:26:45.144 [2024-12-06 18:03:32.694583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.144 [2024-12-06 18:03:32.694592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.144 qpair failed and we were unable to recover it. 00:26:45.144 [2024-12-06 18:03:32.694891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.144 [2024-12-06 18:03:32.694901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.144 qpair failed and we were unable to recover it. 00:26:45.144 [2024-12-06 18:03:32.695234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.144 [2024-12-06 18:03:32.695244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.144 qpair failed and we were unable to recover it. 00:26:45.144 [2024-12-06 18:03:32.695554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.144 [2024-12-06 18:03:32.695564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.144 qpair failed and we were unable to recover it. 00:26:45.144 [2024-12-06 18:03:32.695882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.144 [2024-12-06 18:03:32.695891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.144 qpair failed and we were unable to recover it. 
00:26:45.144 [2024-12-06 18:03:32.696083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.144 [2024-12-06 18:03:32.696093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:45.144 qpair failed and we were unable to recover it.
00:26:45.149 [2024-12-06 18:03:32.696383 - 18:03:32.759189] posix.c:1054 / nvme_tcp.c:2288: the two *ERROR* messages above repeated approximately 210 more times for the same tqpair=0x132d490 (addr=10.0.0.2, port=4420), each attempt ending with "qpair failed and we were unable to recover it."; the duplicated log lines are condensed here.
00:26:45.149 [2024-12-06 18:03:32.759494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.149 [2024-12-06 18:03:32.759503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.149 qpair failed and we were unable to recover it. 00:26:45.149 [2024-12-06 18:03:32.759862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.149 [2024-12-06 18:03:32.759872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.149 qpair failed and we were unable to recover it. 00:26:45.149 [2024-12-06 18:03:32.760153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.149 [2024-12-06 18:03:32.760163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.149 qpair failed and we were unable to recover it. 00:26:45.149 [2024-12-06 18:03:32.760444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.760455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.760664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.760674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.760963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.760973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.761274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.761285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.761598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.761608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.761891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.761900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.762186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.762197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 
00:26:45.150 [2024-12-06 18:03:32.762524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.762533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.762858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.762867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.763060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.763070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.763379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.763390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.763669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.763679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.763960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.763970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.764273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.764283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.764588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.764597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.764991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.765000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.765341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.765352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 
00:26:45.150 [2024-12-06 18:03:32.765513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.765524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.765846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.765858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.766164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.766174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.766493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.766503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.766785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.766794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.767085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.767095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.767451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.767462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.767771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.767780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.768105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.768116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.768464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.768474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 
00:26:45.150 [2024-12-06 18:03:32.768755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.768765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.768977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.768987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.769261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.769271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.769582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.769591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.769951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.769961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.770268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.770278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.770470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.770480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.770784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.770793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.771090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.771106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.771412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.771423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 
00:26:45.150 [2024-12-06 18:03:32.771701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.150 [2024-12-06 18:03:32.771712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.150 qpair failed and we were unable to recover it. 00:26:45.150 [2024-12-06 18:03:32.772051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.772061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.772383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.772395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.772719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.772730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.773053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.773064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.773359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.773369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.773667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.773678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.773972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.773983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.774356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.774370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.774658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.774668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 
00:26:45.151 [2024-12-06 18:03:32.774903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.774913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.775234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.775245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.775555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.775566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.775883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.775893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.776202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.776212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.776399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.776410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.776685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.776695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.777005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.777015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.777314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.777325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.777522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.777533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 
00:26:45.151 [2024-12-06 18:03:32.777589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.777600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.777929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.777939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.778270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.778281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.778601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.778613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.778934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.778944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.779136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.779147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.779481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.779492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.779649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.779660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.779848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.779858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.780163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.780174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 
00:26:45.151 [2024-12-06 18:03:32.780365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.780376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.780577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.780588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.780930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.780940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.781345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.781356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.781548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.781558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.781882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.781895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.782097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.782113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.782447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.782458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.782746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.782757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 00:26:45.151 [2024-12-06 18:03:32.782961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.151 [2024-12-06 18:03:32.782971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.151 qpair failed and we were unable to recover it. 
00:26:45.152 [2024-12-06 18:03:32.783158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.783169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.783563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.783573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.783875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.783885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.784062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.784073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.784300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.784311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.784492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.784502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.784834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.784844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.785180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.785191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.785494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.785503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.785706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.785718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 
00:26:45.152 [2024-12-06 18:03:32.785924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.785934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.786262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.786272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.786596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.786607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.786919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.786930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.787110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.787120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.787363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.787374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.787690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.787700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.788058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.788068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.788391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.788402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.788628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.788639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 
00:26:45.152 [2024-12-06 18:03:32.788959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.788970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.789340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.789351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.789696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.789706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.790026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.790037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.790174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.790184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.790464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.790474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.790798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.790808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.791145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.791156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.791493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.791504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.791790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.791801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 
00:26:45.152 [2024-12-06 18:03:32.791983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.791993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.792304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.792316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.792600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.792610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.792907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.792917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.793237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.793248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.793549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.793558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.793852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.793864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.794034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.152 [2024-12-06 18:03:32.794045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.152 qpair failed and we were unable to recover it. 00:26:45.152 [2024-12-06 18:03:32.794356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.153 [2024-12-06 18:03:32.794367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.153 qpair failed and we were unable to recover it. 00:26:45.153 [2024-12-06 18:03:32.794619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.153 [2024-12-06 18:03:32.794629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.153 qpair failed and we were unable to recover it. 
00:26:45.153 [2024-12-06 18:03:32.794932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.153 [2024-12-06 18:03:32.794943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.153 qpair failed and we were unable to recover it. 00:26:45.153 [2024-12-06 18:03:32.795320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.153 [2024-12-06 18:03:32.795330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.153 qpair failed and we were unable to recover it. 00:26:45.153 [2024-12-06 18:03:32.795668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.153 [2024-12-06 18:03:32.795678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.153 qpair failed and we were unable to recover it. 00:26:45.153 [2024-12-06 18:03:32.795996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.153 [2024-12-06 18:03:32.796007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.153 qpair failed and we were unable to recover it. 00:26:45.153 [2024-12-06 18:03:32.796407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.153 [2024-12-06 18:03:32.796421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.153 qpair failed and we were unable to recover it. 00:26:45.153 [2024-12-06 18:03:32.796775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.153 [2024-12-06 18:03:32.796785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.153 qpair failed and we were unable to recover it. 00:26:45.153 [2024-12-06 18:03:32.797151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.153 [2024-12-06 18:03:32.797161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.153 qpair failed and we were unable to recover it. 00:26:45.153 [2024-12-06 18:03:32.797525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.153 [2024-12-06 18:03:32.797534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.153 qpair failed and we were unable to recover it. 00:26:45.153 [2024-12-06 18:03:32.797873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.153 [2024-12-06 18:03:32.797883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.153 qpair failed and we were unable to recover it. 00:26:45.153 [2024-12-06 18:03:32.798074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.153 [2024-12-06 18:03:32.798083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.153 qpair failed and we were unable to recover it. 
00:26:45.153 [2024-12-06 18:03:32.798498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.153 [2024-12-06 18:03:32.798509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:45.153 qpair failed and we were unable to recover it.
00:26:45.153 [... the three lines above repeat, identical except for their timestamps, for 210 consecutive reconnect attempts from 18:03:32.798498 through 18:03:32.861003: every connect() to 10.0.0.2 port 4420 fails with errno = 111, and qpair 0x132d490 is never recovered ...]
00:26:45.158 [2024-12-06 18:03:32.861314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.158 [2024-12-06 18:03:32.861324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.158 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.861615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.861625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.861914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.861923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.862285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.862295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.862592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.862601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.862889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.862899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.863289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.863300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.863610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.863620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.863940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.863950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.864136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.864148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 
00:26:45.159 [2024-12-06 18:03:32.864480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.864490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.864787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.864797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.865084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.865094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.865384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.865394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.865691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.865701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.865999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.866009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.866313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.866324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.866607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.866617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.866896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.866906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.867114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.867124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 
00:26:45.159 [2024-12-06 18:03:32.867408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.867420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.867615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.867627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.867974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.867984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.868175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.868185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.868418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.868428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.868760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.868769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.869055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.869065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.869374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.869385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.869691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.869700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.870005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.870014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 
00:26:45.159 [2024-12-06 18:03:32.870318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.870329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.870645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.870655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.870969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.870979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.871346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.871356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.871636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.871645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.871953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.871962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.872249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.872259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.872545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.872554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.872923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.872932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.159 qpair failed and we were unable to recover it. 00:26:45.159 [2024-12-06 18:03:32.873245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.159 [2024-12-06 18:03:32.873256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 
00:26:45.160 [2024-12-06 18:03:32.873569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.873579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.873889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.873899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.874205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.874215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.874513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.874522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.874697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.874708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.875080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.875089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.875372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.875382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.875684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.875696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.875995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.876005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.876295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.876306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 
00:26:45.160 [2024-12-06 18:03:32.876634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.876643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.876959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.876968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.877324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.877334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.877529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.877539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.877841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.877851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.878138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.878149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.878339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.878350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.878628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.878637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.878955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.878965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.879352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.879362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 
00:26:45.160 [2024-12-06 18:03:32.879642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.879652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.879937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.879946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.880232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.880242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.880528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.880537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.880824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.880834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.881112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.881122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.881409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.881419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.881739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.881748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.882035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.882045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.882352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.882362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 
00:26:45.160 [2024-12-06 18:03:32.882646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.882656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.882985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.882995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.883334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.883344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.883531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.883542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.883868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.883878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.884264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.884275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.884558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.884568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.884855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.160 [2024-12-06 18:03:32.884865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.160 qpair failed and we were unable to recover it. 00:26:45.160 [2024-12-06 18:03:32.885045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.885056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.885258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.885269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 
00:26:45.161 [2024-12-06 18:03:32.885579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.885589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.885780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.885789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.886098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.886112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.886419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.886429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.886712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.886722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.887055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.887065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.887351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.887362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.887669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.887679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.887997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.888006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.888353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.888363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 
00:26:45.161 [2024-12-06 18:03:32.888659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.888669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.888997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.889007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.889320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.889331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.889611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.889621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.889909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.889918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.890255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.890264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.890545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.890554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.890993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.891003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.891275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.891285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.891584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.891593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 
00:26:45.161 [2024-12-06 18:03:32.891867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.891877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.892180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.892190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.892562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.892572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.892801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.892810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.893128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.893138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.893483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.893493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.893770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.893779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.894077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.894087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.894292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.894302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.894628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.894638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 
00:26:45.161 [2024-12-06 18:03:32.894992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.895002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.895280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.895290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.895491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.895501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.895741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.895751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.896061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.896071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.896361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.896373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.161 qpair failed and we were unable to recover it. 00:26:45.161 [2024-12-06 18:03:32.896657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.161 [2024-12-06 18:03:32.896666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.162 qpair failed and we were unable to recover it. 00:26:45.162 [2024-12-06 18:03:32.896961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.162 [2024-12-06 18:03:32.896970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.162 qpair failed and we were unable to recover it. 00:26:45.162 [2024-12-06 18:03:32.897307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.162 [2024-12-06 18:03:32.897317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.162 qpair failed and we were unable to recover it. 00:26:45.162 [2024-12-06 18:03:32.897598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.162 [2024-12-06 18:03:32.897608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.162 qpair failed and we were unable to recover it. 
00:26:45.162 [2024-12-06 18:03:32.897963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.162 [2024-12-06 18:03:32.897973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.162 qpair failed and we were unable to recover it. 00:26:45.162 [2024-12-06 18:03:32.898278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.162 [2024-12-06 18:03:32.898288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.162 qpair failed and we were unable to recover it. 00:26:45.162 [2024-12-06 18:03:32.898577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.162 [2024-12-06 18:03:32.898586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.162 qpair failed and we were unable to recover it. 00:26:45.162 [2024-12-06 18:03:32.898906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.162 [2024-12-06 18:03:32.898916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.162 qpair failed and we were unable to recover it. 00:26:45.162 [2024-12-06 18:03:32.899226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.162 [2024-12-06 18:03:32.899237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.162 qpair failed and we were unable to recover it. 00:26:45.162 [2024-12-06 18:03:32.899519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.162 [2024-12-06 18:03:32.899529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.162 qpair failed and we were unable to recover it. 00:26:45.162 [2024-12-06 18:03:32.899705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.162 [2024-12-06 18:03:32.899715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.162 qpair failed and we were unable to recover it. 00:26:45.162 [2024-12-06 18:03:32.900024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.162 [2024-12-06 18:03:32.900033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.162 qpair failed and we were unable to recover it. 00:26:45.162 [2024-12-06 18:03:32.900316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.162 [2024-12-06 18:03:32.900326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.162 qpair failed and we were unable to recover it. 00:26:45.162 [2024-12-06 18:03:32.900660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.162 [2024-12-06 18:03:32.900669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.162 qpair failed and we were unable to recover it. 
00:26:45.162 [2024-12-06 18:03:32.900959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.162 [2024-12-06 18:03:32.900969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:45.162 qpair failed and we were unable to recover it.
00:26:45.162 [duplicates elided: the same three-line failure — connect() failed, errno = 111 at posix.c:1054:posix_sock_create, then sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 at nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it." — repeats roughly 200 more times, timestamps 18:03:32.901276 through 18:03:32.962909]
00:26:45.440 [2024-12-06 18:03:32.963236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-12-06 18:03:32.963247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-12-06 18:03:32.963540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-12-06 18:03:32.963549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-12-06 18:03:32.963846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-12-06 18:03:32.963855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-12-06 18:03:32.964157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-12-06 18:03:32.964167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-12-06 18:03:32.964470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-12-06 18:03:32.964480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-12-06 18:03:32.964796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-12-06 18:03:32.964806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-12-06 18:03:32.965089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-12-06 18:03:32.965099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-12-06 18:03:32.965464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-12-06 18:03:32.965473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-12-06 18:03:32.965761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-12-06 18:03:32.965771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-12-06 18:03:32.966053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-12-06 18:03:32.966062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 
00:26:45.440 [2024-12-06 18:03:32.966239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-12-06 18:03:32.966251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-12-06 18:03:32.966523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-12-06 18:03:32.966532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-12-06 18:03:32.966814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-12-06 18:03:32.966824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-12-06 18:03:32.967114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-12-06 18:03:32.967124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-12-06 18:03:32.967294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-12-06 18:03:32.967305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-12-06 18:03:32.967590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-12-06 18:03:32.967600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-12-06 18:03:32.967889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-12-06 18:03:32.967898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.440 qpair failed and we were unable to recover it. 00:26:45.440 [2024-12-06 18:03:32.968179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.440 [2024-12-06 18:03:32.968189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.968501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.968511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.968868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.968884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 
00:26:45.441 [2024-12-06 18:03:32.969173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.969183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.969379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.969389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.969712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.969722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.969951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.969960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.970267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.970277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.970597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.970606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.970888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.970897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.971212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.971222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.971557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.971567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.971848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.971858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 
00:26:45.441 [2024-12-06 18:03:32.972144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.972154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.972455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.972466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.972677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.972687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.973016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.973027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.973347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.973358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.973550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.973560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.973883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.973892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.974206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.974216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.974500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.974510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.974869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.974879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 
00:26:45.441 [2024-12-06 18:03:32.975163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.975173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.975482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.975492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.975783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.975792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.976082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.976092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.976411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.976421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.976697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.976706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.977023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.977033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.977386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.977397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.977713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.977723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.978025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.978035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 
00:26:45.441 [2024-12-06 18:03:32.978334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.978344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.978624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.978634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.978804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.978814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.978962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.978972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.979156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.979167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.441 qpair failed and we were unable to recover it. 00:26:45.441 [2024-12-06 18:03:32.979453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.441 [2024-12-06 18:03:32.979463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.979774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.979783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.979951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.979961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.980233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.980244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.980550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.980560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 
00:26:45.442 [2024-12-06 18:03:32.980851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.980861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.981164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.981174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.981538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.981548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.981872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.981881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.982171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.982181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.982503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.982513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.982831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.982841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.983131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.983141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.983435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.983445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.983736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.983746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 
00:26:45.442 [2024-12-06 18:03:32.984031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.984041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.984363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.984374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.984725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.984735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.984920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.984930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.985226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.985236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.985401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.985411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.985763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.985773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.986095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.986109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.986447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.986457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.986732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.986741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 
00:26:45.442 [2024-12-06 18:03:32.987032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.987041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.987250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.987261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.987585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.987595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.987893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.987903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.988193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.988203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.988416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.988426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.988742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.988752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.989045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.989057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.989376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.989386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.989704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.989713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 
00:26:45.442 [2024-12-06 18:03:32.990003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.990012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.990298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.990308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.990478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.990490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.442 qpair failed and we were unable to recover it. 00:26:45.442 [2024-12-06 18:03:32.990668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.442 [2024-12-06 18:03:32.990677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.990899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.990909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.991242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.991252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.991618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.991627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.991952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.991962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.992262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.992272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.992590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.992599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 
00:26:45.443 [2024-12-06 18:03:32.992893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.992902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.993247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.993258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.993422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.993432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.993765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.993775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.994103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.994113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.994411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.994421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.994712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.994721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.995031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.995040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.995363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.995373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.995659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.995668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 
00:26:45.443 [2024-12-06 18:03:32.995958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.995968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.996285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.996295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.996643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.996652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.996932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.996942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.997278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.997291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.997594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.997604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.997909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.997919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.998190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.998201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.998485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.998495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.998772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.998781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 
00:26:45.443 [2024-12-06 18:03:32.999090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.999108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.999476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.999486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.999652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.999661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:32.999919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:32.999929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:33.000188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:33.000199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:33.000513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:33.000523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:33.000824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:33.000833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:33.001135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:33.001146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:33.001437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:33.001447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 00:26:45.443 [2024-12-06 18:03:33.001730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.443 [2024-12-06 18:03:33.001740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.443 qpair failed and we were unable to recover it. 
00:26:45.443 [2024-12-06 18:03:33.002020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.443 [2024-12-06 18:03:33.002030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:45.443 qpair failed and we were unable to recover it.
00:26:45.443 [... the same connect() failed (errno = 111) / sock connection error triplet for tqpair=0x132d490 (addr=10.0.0.2, port=4420) repeats continuously from 18:03:33.002 through 18:03:33.065; the intervening repetitions are elided ...]
00:26:45.449 [2024-12-06 18:03:33.065261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.449 [2024-12-06 18:03:33.065270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:45.449 qpair failed and we were unable to recover it.
00:26:45.449 [2024-12-06 18:03:33.065564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-12-06 18:03:33.065574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-12-06 18:03:33.065871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-12-06 18:03:33.065881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-12-06 18:03:33.066168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-12-06 18:03:33.066178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-12-06 18:03:33.066478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-12-06 18:03:33.066488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-12-06 18:03:33.066773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.449 [2024-12-06 18:03:33.066783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.449 qpair failed and we were unable to recover it. 00:26:45.449 [2024-12-06 18:03:33.067091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.067111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.067448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.067458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.067753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.067763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.068020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.068030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.068365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.068375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 
00:26:45.450 [2024-12-06 18:03:33.068684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.068693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.068981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.068990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.069279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.069289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.069602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.069612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.069801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.069811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.070119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.070129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.070436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.070446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.070739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.070749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.071035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.071047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.071324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.071334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 
00:26:45.450 [2024-12-06 18:03:33.071644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.071654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.071950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.071960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.072242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.072252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.072569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.072579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.072908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.072917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.073226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.073236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.073547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.073557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.073880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.073890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.074215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.074225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.074538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.074548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 
00:26:45.450 [2024-12-06 18:03:33.074836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.074846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.075138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.075148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.075432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.075442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.075728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.075738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.076016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.076026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.076336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.076347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.076628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.076638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.076899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.076909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.077237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.077248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.450 [2024-12-06 18:03:33.077598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.077607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 
00:26:45.450 [2024-12-06 18:03:33.077770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.450 [2024-12-06 18:03:33.077781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.450 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.078058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.078067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.078374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.078384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.078682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.078693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.078996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.079006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.079315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.079326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.079629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.079639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.079946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.079955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.080226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.080237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.080514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.080524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 
00:26:45.451 [2024-12-06 18:03:33.080827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.080836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.081121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.081131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.081471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.081481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.081767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.081777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.082119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.082129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.082482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.082492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.082792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.082802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.083090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.083104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.083270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.083280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.083477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.083487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 
00:26:45.451 [2024-12-06 18:03:33.083817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.083827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.084114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.084125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.084437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.084447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.084741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.084751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.085090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.085104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.085410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.085419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.085713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.085723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.086006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.086015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.086376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.086386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.086672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.086681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 
00:26:45.451 [2024-12-06 18:03:33.086962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.086972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.087258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.087268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.087557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.087567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.087849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.087859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.088167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.451 [2024-12-06 18:03:33.088177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.451 qpair failed and we were unable to recover it. 00:26:45.451 [2024-12-06 18:03:33.088468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.088477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.088761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.088770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.089056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.089066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.089346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.089357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.089640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.089650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 
00:26:45.452 [2024-12-06 18:03:33.089959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.089969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.090257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.090268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.090609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.090619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.090918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.090928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.091097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.091112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.091420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.091430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.091744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.091758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.092096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.092113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.092325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.092334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.092633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.092643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 
00:26:45.452 [2024-12-06 18:03:33.092961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.092970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.093269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.093280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.093463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.093473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.093743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.093753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.094031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.094041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.094326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.094337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.094705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.094715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.095008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.095018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.095361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.095371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.095691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.095700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 
00:26:45.452 [2024-12-06 18:03:33.095994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.096004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.096297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.096307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.096589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.096599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.096929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.096939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.097224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.097234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.097402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.097413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.097711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.097721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.098065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.098075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.098355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.098366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.098679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.098688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 
00:26:45.452 [2024-12-06 18:03:33.098977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.098986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.099272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.452 [2024-12-06 18:03:33.099282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.452 qpair failed and we were unable to recover it. 00:26:45.452 [2024-12-06 18:03:33.099582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-12-06 18:03:33.099592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-12-06 18:03:33.099910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-12-06 18:03:33.099921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-12-06 18:03:33.100204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-12-06 18:03:33.100215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-12-06 18:03:33.100400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-12-06 18:03:33.100410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-12-06 18:03:33.100717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-12-06 18:03:33.100727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-12-06 18:03:33.101045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-12-06 18:03:33.101054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-12-06 18:03:33.101336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-12-06 18:03:33.101347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-12-06 18:03:33.101637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-12-06 18:03:33.101646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 
00:26:45.453 [2024-12-06 18:03:33.101934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-12-06 18:03:33.101944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-12-06 18:03:33.102244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-12-06 18:03:33.102254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-12-06 18:03:33.102537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-12-06 18:03:33.102546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-12-06 18:03:33.102837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-12-06 18:03:33.102847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-12-06 18:03:33.103035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-12-06 18:03:33.103045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-12-06 18:03:33.103345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-12-06 18:03:33.103355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-12-06 18:03:33.103658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-12-06 18:03:33.103668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-12-06 18:03:33.103968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-12-06 18:03:33.103977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-12-06 18:03:33.104268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-12-06 18:03:33.104279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 00:26:45.453 [2024-12-06 18:03:33.104469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.453 [2024-12-06 18:03:33.104478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.453 qpair failed and we were unable to recover it. 
00:26:45.453 [2024-12-06 18:03:33.104768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.453 [2024-12-06 18:03:33.104778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:45.453 qpair failed and we were unable to recover it.
... [the same three-line connect()/qpair-failure sequence repeats roughly 200 more times between 18:03:33.105 and 18:03:33.167, always for tqpair=0x132d490 at 10.0.0.2 port 4420; only the timestamps differ] ...
00:26:45.459 [2024-12-06 18:03:33.167580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.459 [2024-12-06 18:03:33.167590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:45.459 qpair failed and we were unable to recover it.
00:26:45.459 [2024-12-06 18:03:33.167931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.167941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.168112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.168123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.168402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.168411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.168622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.168632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.168944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.168954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.169268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.169279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.169588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.169597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.169893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.169902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.170187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.170198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.170539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.170548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 
00:26:45.459 [2024-12-06 18:03:33.170833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.170842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.171190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.171200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.171554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.171563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.171854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.171864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.172050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.172061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.172358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.172369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.172668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.172677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.172963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.172973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.173143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.173153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.173470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.173480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 
00:26:45.459 [2024-12-06 18:03:33.173768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.173777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.174061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.174070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.174409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.174419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.174733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.174743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.175092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.175106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.175397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.175406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.175687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.175696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.459 [2024-12-06 18:03:33.176060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.459 [2024-12-06 18:03:33.176070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.459 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.176382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.176392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.176681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.176691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 
00:26:45.460 [2024-12-06 18:03:33.176975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.176985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.177315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.177326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.177626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.177636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.177945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.177954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.178238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.178248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.178595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.178605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.178896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.178905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.179203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.179213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.179531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.179541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.179823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.179833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 
00:26:45.460 [2024-12-06 18:03:33.180178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.180188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.180473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.180483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.180831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.180841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.181148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.181158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.181499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.181508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.181794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.181804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.182009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.182018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.182316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.182326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.182649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.182658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.182994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.183003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 
00:26:45.460 [2024-12-06 18:03:33.183180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.183190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.183465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.183475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.183715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.183725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.184023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.184033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.184355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.184365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.184663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.184673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.184956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.184966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.185256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.185266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.185552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.185564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.185742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.185752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 
00:26:45.460 [2024-12-06 18:03:33.186049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.186059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.186389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.186400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.186722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.186732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.187017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.187027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.187321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.187331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.460 [2024-12-06 18:03:33.187631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.460 [2024-12-06 18:03:33.187641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.460 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.187983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.187992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.188197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.188207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.188628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.188641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.188841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.188854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 
00:26:45.461 [2024-12-06 18:03:33.189044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.189054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.189383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.189394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.189696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.189706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.190008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.190018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.190306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.190316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.190602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.190612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.190903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.190913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.191271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.191281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.191572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.191582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.191866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.191876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 
00:26:45.461 [2024-12-06 18:03:33.192157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.192168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.192474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.192484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.192788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.192797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.193087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.193097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.193402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.193412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.193769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.193781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.194062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.194071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.194394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.194405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.194612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.194622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.194914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.194924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 
00:26:45.461 [2024-12-06 18:03:33.195226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.195236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.195527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.195537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.195827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.195836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.196133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.196143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.196456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.196466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.196753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.196762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.197087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.197097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.197399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.197409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.197749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.197759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.198050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.198060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 
00:26:45.461 [2024-12-06 18:03:33.198353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.198363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.198643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.198653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.198936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.198946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.199229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.461 [2024-12-06 18:03:33.199239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.461 qpair failed and we were unable to recover it. 00:26:45.461 [2024-12-06 18:03:33.199478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.199488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.199799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.199808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.200104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.200115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.200471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.200481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.200755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.200764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.201042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.201052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 
00:26:45.462 [2024-12-06 18:03:33.201397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.201407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.201694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.201704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.202032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.202043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.202322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.202333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.202644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.202654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.202961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.202971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.203315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.203325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.203613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.203623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.203955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.203965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.204276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.204286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 
00:26:45.462 [2024-12-06 18:03:33.204581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.204591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.204879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.204889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.205203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.205213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.205516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.205526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.205729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.205739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.206049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.206058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.206343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.206354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.206639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.206649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.206933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.206943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 00:26:45.462 [2024-12-06 18:03:33.207231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.462 [2024-12-06 18:03:33.207241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.462 qpair failed and we were unable to recover it. 
00:26:45.462 [2024-12-06 18:03:33.207517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.462 [2024-12-06 18:03:33.207527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:45.462 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every subsequent reconnect attempt, with timestamps from 18:03:33.207839 through 18:03:33.270797; duplicate entries elided ...]
00:26:45.742 [2024-12-06 18:03:33.271083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.742 [2024-12-06 18:03:33.271093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.742 qpair failed and we were unable to recover it. 00:26:45.742 [2024-12-06 18:03:33.271377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.742 [2024-12-06 18:03:33.271387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.742 qpair failed and we were unable to recover it. 00:26:45.742 [2024-12-06 18:03:33.271671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.742 [2024-12-06 18:03:33.271680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.742 qpair failed and we were unable to recover it. 00:26:45.742 [2024-12-06 18:03:33.271970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.742 [2024-12-06 18:03:33.271981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.742 qpair failed and we were unable to recover it. 00:26:45.742 [2024-12-06 18:03:33.272267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.742 [2024-12-06 18:03:33.272278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.742 qpair failed and we were unable to recover it. 00:26:45.742 [2024-12-06 18:03:33.272474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.742 [2024-12-06 18:03:33.272485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.742 qpair failed and we were unable to recover it. 00:26:45.742 [2024-12-06 18:03:33.272800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.742 [2024-12-06 18:03:33.272810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.742 qpair failed and we were unable to recover it. 00:26:45.742 [2024-12-06 18:03:33.273121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.742 [2024-12-06 18:03:33.273132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.742 qpair failed and we were unable to recover it. 00:26:45.742 [2024-12-06 18:03:33.273444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.742 [2024-12-06 18:03:33.273453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.742 qpair failed and we were unable to recover it. 00:26:45.742 [2024-12-06 18:03:33.273761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.742 [2024-12-06 18:03:33.273770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.742 qpair failed and we were unable to recover it. 
00:26:45.742 [2024-12-06 18:03:33.274066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.742 [2024-12-06 18:03:33.274076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.742 qpair failed and we were unable to recover it. 00:26:45.742 [2024-12-06 18:03:33.274376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.742 [2024-12-06 18:03:33.274387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.742 qpair failed and we were unable to recover it. 00:26:45.742 [2024-12-06 18:03:33.274680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.742 [2024-12-06 18:03:33.274690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.742 qpair failed and we were unable to recover it. 00:26:45.742 [2024-12-06 18:03:33.274969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.742 [2024-12-06 18:03:33.274978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.742 qpair failed and we were unable to recover it. 00:26:45.742 [2024-12-06 18:03:33.275260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.742 [2024-12-06 18:03:33.275272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.742 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.275555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.275565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.275850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.275861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.276210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.276221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.276516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.276526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.276826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.276836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 
00:26:45.743 [2024-12-06 18:03:33.277167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.277178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.277535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.277545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.277834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.277844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.278130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.278141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.278313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.278325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.278647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.278657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.278978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.278988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.279261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.279273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.279607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.279617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.279920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.279931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 
00:26:45.743 [2024-12-06 18:03:33.280139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.280149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.280468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.280478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.280767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.280777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.281139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.281149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.281468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.281478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.281771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.281782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.282115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.282127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.282462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.282472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.282798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.282809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.283087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.283098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 
00:26:45.743 [2024-12-06 18:03:33.283481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.283492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.283772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.283783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.284068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.284078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.284364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.284374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.284668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.284679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.284963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.284973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.285267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.285279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.285539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.743 [2024-12-06 18:03:33.285549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.743 qpair failed and we were unable to recover it. 00:26:45.743 [2024-12-06 18:03:33.285893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.285903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.286233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.286244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 
00:26:45.744 [2024-12-06 18:03:33.286623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.286633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.286902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.286913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.287213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.287224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.287526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.287536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.287815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.287825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.288158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.288168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.288453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.288463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.288643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.288655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.288958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.288969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.289253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.289264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 
00:26:45.744 [2024-12-06 18:03:33.289581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.289590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.289933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.289944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.290232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.290243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.290534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.290545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.290828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.290838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.291120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.291131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.291488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.291499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.291805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.291816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.292138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.292148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.292499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.292509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 
00:26:45.744 [2024-12-06 18:03:33.292818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.292828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.293158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.293168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.293469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.293479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.293668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.293678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.293960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.293970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.294252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.294263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.294597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.294608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.294903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.294913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.295081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.295092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.295424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.295435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 
00:26:45.744 [2024-12-06 18:03:33.295727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.295737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.744 [2024-12-06 18:03:33.296040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.744 [2024-12-06 18:03:33.296050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.744 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.296360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.296370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.296647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.296657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.296945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.296958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.297280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.297291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.297572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.297582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.297861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.297871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.298153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.298164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.298369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.298379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 
00:26:45.745 [2024-12-06 18:03:33.298732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.298742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.299048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.299058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.299369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.299380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.299661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.299672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.299863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.299873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.300198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.300208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.300496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.300506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.300797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.300807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.301111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.301122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.301455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.301465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 
00:26:45.745 [2024-12-06 18:03:33.301757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.301767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.302060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.302070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.302426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.302438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.302736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.302746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.303041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.303050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.303371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.303382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.303665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.303675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.304024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.304035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.304367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.304378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 00:26:45.745 [2024-12-06 18:03:33.304732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.745 [2024-12-06 18:03:33.304743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.745 qpair failed and we were unable to recover it. 
00:26:45.746 [2024-12-06 18:03:33.304919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.304930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.305189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.305200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.305481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.305491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.305822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.305833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.306136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.306147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.306467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.306478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.306774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.306784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.306989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.306999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.307415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.307426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.307727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.307737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 
00:26:45.746 [2024-12-06 18:03:33.308052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.308063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.308371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.308382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.308659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.308670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.308839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.308850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.309163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.309179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.309460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.309470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.309772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.309782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.310076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.310086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.310387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.310398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.310687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.310698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 
00:26:45.746 [2024-12-06 18:03:33.311019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.311029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.311303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.311314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.311500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.311510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.311875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.311885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.312205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.312215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.312533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.312543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.312828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.312838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.313176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.313187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.313465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.313475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 00:26:45.746 [2024-12-06 18:03:33.313663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.746 [2024-12-06 18:03:33.313675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.746 qpair failed and we were unable to recover it. 
00:26:45.752 [2024-12-06 18:03:33.370912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.752 [2024-12-06 18:03:33.370922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.752 qpair failed and we were unable to recover it. 00:26:45.752 [2024-12-06 18:03:33.371218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.752 [2024-12-06 18:03:33.371228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.752 qpair failed and we were unable to recover it. 00:26:45.752 [2024-12-06 18:03:33.371523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.752 [2024-12-06 18:03:33.371533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.752 qpair failed and we were unable to recover it. 00:26:45.752 [2024-12-06 18:03:33.371877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.752 [2024-12-06 18:03:33.371887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.752 qpair failed and we were unable to recover it. 00:26:45.752 [2024-12-06 18:03:33.372212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.752 [2024-12-06 18:03:33.372222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.752 qpair failed and we were unable to recover it. 00:26:45.752 [2024-12-06 18:03:33.372515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.752 [2024-12-06 18:03:33.372524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.752 qpair failed and we were unable to recover it. 00:26:45.752 [2024-12-06 18:03:33.372834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.752 [2024-12-06 18:03:33.372844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.752 qpair failed and we were unable to recover it. 00:26:45.752 [2024-12-06 18:03:33.373130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.752 [2024-12-06 18:03:33.373141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.752 qpair failed and we were unable to recover it. 00:26:45.752 [2024-12-06 18:03:33.373447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.752 [2024-12-06 18:03:33.373457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.752 qpair failed and we were unable to recover it. 00:26:45.752 [2024-12-06 18:03:33.373755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.752 [2024-12-06 18:03:33.373765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.752 qpair failed and we were unable to recover it. 
00:26:45.752 [2024-12-06 18:03:33.374067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.374077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.374394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.374404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.374688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.374698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.374981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.374991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.375189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.375199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.375526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.375536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.375817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.375827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.376137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.376147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.376421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.376431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.376706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.376715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 
00:26:45.753 [2024-12-06 18:03:33.377041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.377051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.377367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.377378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.377681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.377691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.378033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.378043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.378321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.378332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.378526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.378536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.378857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.378867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.379163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.379173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.379471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.379480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.379775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.379785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 
00:26:45.753 [2024-12-06 18:03:33.380093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.380113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.380445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.380456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.380758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.380768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.380821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.380831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.381119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.381129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.381404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.381415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.381715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.381725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.382006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.382016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.382293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.382303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.382643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.382654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 
00:26:45.753 [2024-12-06 18:03:33.382934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.382944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.383106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.383117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.383454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.383464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.383761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.753 [2024-12-06 18:03:33.383771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.753 qpair failed and we were unable to recover it. 00:26:45.753 [2024-12-06 18:03:33.384058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.384068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.384363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.384373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.384660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.384669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.385029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.385039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.385338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.385348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.385629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.385638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 
00:26:45.754 [2024-12-06 18:03:33.385916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.385926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.386248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.386258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.386569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.386579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.386869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.386878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.387192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.387203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.387502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.387511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.387843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.387852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.388142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.388152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.388467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.388476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.388846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.388856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 
00:26:45.754 [2024-12-06 18:03:33.389138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.389149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.389482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.389492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.389719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.389729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.390051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.390060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.390373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.390383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.390697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.390707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.390881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.390892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.391220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.391231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.391519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.391530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.391837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.391846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 
00:26:45.754 [2024-12-06 18:03:33.392043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.392053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.392358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.392368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.392662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.392672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.392857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.392867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.393195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.393205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.393529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.393539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.393871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.393881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.754 [2024-12-06 18:03:33.394071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.754 [2024-12-06 18:03:33.394080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.754 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.394382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.394393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.394696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.394706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 
00:26:45.755 [2024-12-06 18:03:33.395018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.395029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.395340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.395350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.395689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.395699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.395993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.396002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.396296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.396306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.396499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.396509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.396832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.396842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.397126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.397137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.397443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.397453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.397736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.397745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 
00:26:45.755 [2024-12-06 18:03:33.398035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.398045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.398211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.398223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.398541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.398550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.398835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.398848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.399044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.399054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.399367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.399377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.399695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.399705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.400014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.400024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.400310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.400320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.400667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.400677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 
00:26:45.755 [2024-12-06 18:03:33.400983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.400992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.401304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.401315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.401626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.401636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.401939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.401949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.402273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.402283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.402479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.402488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.402811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.402820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.403120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.403131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.403424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.403434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.403793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.403802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 
00:26:45.755 [2024-12-06 18:03:33.404094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.404110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.755 [2024-12-06 18:03:33.404456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.755 [2024-12-06 18:03:33.404466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.755 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.404671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.404680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.404965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.404975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.405299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.405310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.405521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.405530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.405833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.405842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.406126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.406136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.406423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.406432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.406720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.406729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 
00:26:45.756 [2024-12-06 18:03:33.406895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.406906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.407186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.407196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.407512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.407521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.407819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.407828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.408166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.408176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.408478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.408487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.408770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.408779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.409081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.409090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.409427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.409437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.409721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.409730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 
00:26:45.756 [2024-12-06 18:03:33.410037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.410047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.410343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.410353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.410692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.410702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.410994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.411003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.411299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.411311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.411616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.411626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.411908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.411919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.412219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.412229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.412405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.412415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.756 [2024-12-06 18:03:33.412687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.412697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 
00:26:45.756 [2024-12-06 18:03:33.412883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.756 [2024-12-06 18:03:33.412893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.756 qpair failed and we were unable to recover it. 00:26:45.757 [2024-12-06 18:03:33.413245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.757 [2024-12-06 18:03:33.413255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.757 qpair failed and we were unable to recover it. 00:26:45.757 [2024-12-06 18:03:33.413439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.757 [2024-12-06 18:03:33.413449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.757 qpair failed and we were unable to recover it. 00:26:45.757 [2024-12-06 18:03:33.413769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.757 [2024-12-06 18:03:33.413780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.757 qpair failed and we were unable to recover it. 00:26:45.757 [2024-12-06 18:03:33.414085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.757 [2024-12-06 18:03:33.414095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.757 qpair failed and we were unable to recover it. 00:26:45.757 [2024-12-06 18:03:33.414425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.757 [2024-12-06 18:03:33.414435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.757 qpair failed and we were unable to recover it. 00:26:45.757 [2024-12-06 18:03:33.414651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.757 [2024-12-06 18:03:33.414660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.757 qpair failed and we were unable to recover it. 00:26:45.757 [2024-12-06 18:03:33.414984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.757 [2024-12-06 18:03:33.414994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.757 qpair failed and we were unable to recover it. 00:26:45.757 [2024-12-06 18:03:33.415321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.757 [2024-12-06 18:03:33.415331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.757 qpair failed and we were unable to recover it. 00:26:45.757 [2024-12-06 18:03:33.415622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.757 [2024-12-06 18:03:33.415632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.757 qpair failed and we were unable to recover it. 
00:26:45.763 [2024-12-06 18:03:33.476050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.476059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.476232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.476244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.476547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.476557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.476896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.476905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.477192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.477202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.477523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.477533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.477822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.477832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.478147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.478157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.478470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.478479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.478642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.478652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 
00:26:45.763 [2024-12-06 18:03:33.478933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.478942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.479240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.479250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.479622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.479632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.479910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.479920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.480195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.480205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.480510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.480520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.480714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.480724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.481054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.481064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.481375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.481386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.481697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.481706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 
00:26:45.763 [2024-12-06 18:03:33.482010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.482020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.482205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.482216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.482387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.482397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.482567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.482576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.763 [2024-12-06 18:03:33.482742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.763 [2024-12-06 18:03:33.482753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.763 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.483061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.483071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.483379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.483389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.483674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.483683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.483966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.483978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.484252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.484262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 
00:26:45.764 [2024-12-06 18:03:33.484461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.484472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.484783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.484793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.485085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.485095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.485393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.485403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.485736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.485746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.486041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.486051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.486227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.486237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.486414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.486424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.486743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.486753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.487076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.487085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 
00:26:45.764 [2024-12-06 18:03:33.487392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.487402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.487678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.487687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.487852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.487862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.488146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.488157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.488470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.488480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.488807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.488817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.489135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.489145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.489460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.489469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.489741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.489750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.490026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.490036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 
00:26:45.764 [2024-12-06 18:03:33.490228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.490237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.490554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.490563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.490839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.490849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.491175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.491186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.491361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.491371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.491709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.491721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.492071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.492081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.764 qpair failed and we were unable to recover it. 00:26:45.764 [2024-12-06 18:03:33.492392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.764 [2024-12-06 18:03:33.492402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.492677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.492686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.492986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.492996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 
00:26:45.765 [2024-12-06 18:03:33.493293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.493303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.493594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.493604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.493931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.493941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.494255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.494265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.494399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.494410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.494632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.494642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.494940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.494950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.495278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.495289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.495565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.495574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.495876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.495886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 
00:26:45.765 [2024-12-06 18:03:33.496179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.496189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.496494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.496504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.496735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.496744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.497036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.497046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.497335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.497346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.497661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.497670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.497941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.497950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.498254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.498264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.498553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.498562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.498938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.498947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 
00:26:45.765 [2024-12-06 18:03:33.499257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.499267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.499548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.499558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.499844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.499854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.500165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.500175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.500483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.500493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.500814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.500824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.501116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.501126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.501422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.501432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.501709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.501719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.501999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.502009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 
00:26:45.765 [2024-12-06 18:03:33.502335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.502346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.502682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.765 [2024-12-06 18:03:33.502691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.765 qpair failed and we were unable to recover it. 00:26:45.765 [2024-12-06 18:03:33.503004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.503013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.503315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.503325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.503602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.503611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.503910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.503919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.504210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.504220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.504541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.504551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.504871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.504881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.505169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.505180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 
00:26:45.766 [2024-12-06 18:03:33.505499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.505509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.505708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.505717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.506025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.506034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.506365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.506375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.506701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.506711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.506987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.506997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.507318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.507328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.507684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.507693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.507965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.507975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.508320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.508330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 
00:26:45.766 [2024-12-06 18:03:33.508626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.508636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.508945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.508954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.509257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.509267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.509578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.509587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.509908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.509917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.510189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.510199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.510542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.510552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.510961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.510971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.511285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.511295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 00:26:45.766 [2024-12-06 18:03:33.511568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.766 [2024-12-06 18:03:33.511577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.766 qpair failed and we were unable to recover it. 
00:26:45.766 [2024-12-06 18:03:33.511858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-12-06 18:03:33.511868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-12-06 18:03:33.512168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-12-06 18:03:33.512178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-12-06 18:03:33.512478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-12-06 18:03:33.512487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-12-06 18:03:33.512838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-12-06 18:03:33.512849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-12-06 18:03:33.513143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-12-06 18:03:33.513154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-12-06 18:03:33.513441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-12-06 18:03:33.513450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-12-06 18:03:33.513774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-12-06 18:03:33.513784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-12-06 18:03:33.513986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-12-06 18:03:33.513996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-12-06 18:03:33.514313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-12-06 18:03:33.514323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 00:26:45.767 [2024-12-06 18:03:33.514628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.767 [2024-12-06 18:03:33.514638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:45.767 qpair failed and we were unable to recover it. 
00:26:45.767 [2024-12-06 18:03:33.514931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.767 [2024-12-06 18:03:33.514940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:45.767 qpair failed and we were unable to recover it.
00:26:45.767 [... the same three-line failure (connect() refused with errno = 111, i.e. ECONNREFUSED, tqpair=0x132d490, addr=10.0.0.2, port=4420, qpair unrecoverable) repeats continuously through this interval ...]
00:26:46.046 [2024-12-06 18:03:33.578361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.046 [2024-12-06 18:03:33.578371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.046 qpair failed and we were unable to recover it.
00:26:46.046 [2024-12-06 18:03:33.578671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.046 [2024-12-06 18:03:33.578682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.046 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.578970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.578980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.579159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.579169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.579445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.579454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.579634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.579644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.579852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.579862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.580175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.580185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.580491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.580501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.580793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.580803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.581116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.581126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 
00:26:46.047 [2024-12-06 18:03:33.581450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.581460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.581758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.581768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.582053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.582062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.582372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.582382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.582676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.582686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.582968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.582978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.583268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.583278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.583671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.583680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.583864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.583875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.584196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.584206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 
00:26:46.047 [2024-12-06 18:03:33.584398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.584408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.584736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.584746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.585082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.585091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.585420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.585430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.585594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.585603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.585794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.585803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.586103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.586114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.586392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.586404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.586767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.586777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.587063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.587072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 
00:26:46.047 [2024-12-06 18:03:33.587367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.587379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.587679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.587689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.587870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.587880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.588204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.588214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.588512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.588522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.588806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.588815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.589131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.589141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.589443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.589453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.589736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.047 [2024-12-06 18:03:33.589746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.047 qpair failed and we were unable to recover it. 00:26:46.047 [2024-12-06 18:03:33.590028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.590038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 
00:26:46.048 [2024-12-06 18:03:33.590410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.590420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.590739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.590749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.590960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.590970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.591142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.591153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.591358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.591367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.591673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.591683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.591868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.591877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.592052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.592062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.592369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.592379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.592670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.592680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 
00:26:46.048 [2024-12-06 18:03:33.593033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.593043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.593334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.593344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.593726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.593735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.594023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.594032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.594339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.594349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.594690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.594700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.594994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.595003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.595167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.595178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.595486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.595496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.595849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.595859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 
00:26:46.048 [2024-12-06 18:03:33.596141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.596151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.596395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.596405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.596698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.596707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.597015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.597024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.597330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.597340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.597631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.597640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.597914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.597924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.598213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.598223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.598532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.598542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.598831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.598841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 
00:26:46.048 [2024-12-06 18:03:33.599149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.599160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.599461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.599471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.599814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.599823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.600120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.600130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.600419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.600429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.600737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.600747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.601035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.601045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.048 qpair failed and we were unable to recover it. 00:26:46.048 [2024-12-06 18:03:33.601355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.048 [2024-12-06 18:03:33.601366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.601675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.601684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.601990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.601999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 
00:26:46.049 [2024-12-06 18:03:33.602193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.602202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.602469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.602478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.602781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.602791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.603109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.603119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.603415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.603424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.603772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.603782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.604065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.604074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.604409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.604420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.604726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.604735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.605056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.605065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 
00:26:46.049 [2024-12-06 18:03:33.605233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.605243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.605550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.605559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.605893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.605903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.606186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.606196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.606368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.606378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.606704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.606716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.607034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.607043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.607417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.607427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.607717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.607727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.608008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.608018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 
00:26:46.049 [2024-12-06 18:03:33.608353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.608363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.608647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.608657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.608952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.608962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.609245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.609255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.609428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.609437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.609669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.609679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.609991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.610001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.610331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.610341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.610627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.610636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.049 qpair failed and we were unable to recover it. 00:26:46.049 [2024-12-06 18:03:33.610837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.049 [2024-12-06 18:03:33.610847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 
00:26:46.050 [2024-12-06 18:03:33.611146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.611157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 00:26:46.050 [2024-12-06 18:03:33.611470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.611479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 00:26:46.050 [2024-12-06 18:03:33.611763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.611772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 00:26:46.050 [2024-12-06 18:03:33.612061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.612071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 00:26:46.050 [2024-12-06 18:03:33.612350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.612360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 00:26:46.050 [2024-12-06 18:03:33.612656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.612666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 00:26:46.050 [2024-12-06 18:03:33.612843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.612853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 00:26:46.050 [2024-12-06 18:03:33.613085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.613094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 00:26:46.050 [2024-12-06 18:03:33.613401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.613410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 00:26:46.050 [2024-12-06 18:03:33.613743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.613753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 
00:26:46.050 [2024-12-06 18:03:33.614051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.614061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 00:26:46.050 [2024-12-06 18:03:33.614409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.614419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 00:26:46.050 [2024-12-06 18:03:33.614600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.614613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 00:26:46.050 [2024-12-06 18:03:33.614928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.614937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 00:26:46.050 [2024-12-06 18:03:33.615242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.615252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 00:26:46.050 [2024-12-06 18:03:33.615554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.615564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 00:26:46.050 [2024-12-06 18:03:33.615855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.615865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 00:26:46.050 [2024-12-06 18:03:33.616239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.616249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 00:26:46.050 [2024-12-06 18:03:33.616559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.616569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 00:26:46.050 [2024-12-06 18:03:33.616896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.050 [2024-12-06 18:03:33.616905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.050 qpair failed and we were unable to recover it. 
00:26:46.050 [2024-12-06 18:03:33.617204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.050 [2024-12-06 18:03:33.617214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.050 qpair failed and we were unable to recover it.
00:26:46.050 [... the three entries above repeat 210 times in total, with timestamps advancing from 18:03:33.617204 through 18:03:33.679812; every attempt ends the same way: connect() to 10.0.0.2:4420 returns errno 111 and tqpair 0x132d490 cannot be recovered ...]
00:26:46.056 [2024-12-06 18:03:33.679812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.056 [2024-12-06 18:03:33.679824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.056 qpair failed and we were unable to recover it.
00:26:46.056 [2024-12-06 18:03:33.680121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.680131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.680447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.680457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.680738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.680748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.681098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.681112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.681273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.681283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.681517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.681527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.681821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.681831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.682164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.682174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.682456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.682466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.682749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.682759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 
00:26:46.056 [2024-12-06 18:03:33.683041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.683051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.683347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.683357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.683574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.683584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.683895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.683905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.684222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.684232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.684576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.684586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.684758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.684768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.685042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.685051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.685398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.685408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.685813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.685822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 
00:26:46.056 [2024-12-06 18:03:33.686148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.686158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.686473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.686483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.686770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.686779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.687095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.687114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.687473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.687482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.687766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.687776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.688059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.688071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.688247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.688259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.688530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.688540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.688819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.688829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 
00:26:46.056 [2024-12-06 18:03:33.689145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.689155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.689451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.689461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.689666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.689676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.056 qpair failed and we were unable to recover it. 00:26:46.056 [2024-12-06 18:03:33.690006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.056 [2024-12-06 18:03:33.690015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.690408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.690419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.690716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.690725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.691008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.691017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.691322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.691332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.691629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.691639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.691933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.691942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 
00:26:46.057 [2024-12-06 18:03:33.692238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.692249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.692460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.692470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.692648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.692659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.692952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.692961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.693308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.693318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.693594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.693604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.693899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.693909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.694253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.694263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.694569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.694578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.694873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.694882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 
00:26:46.057 [2024-12-06 18:03:33.695204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.695214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.695457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.695466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.695770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.695779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.695983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.695993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.696284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.696295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.696486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.696495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.696843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.696852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.697136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.697146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.697469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.697478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.697777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.697787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 
00:26:46.057 [2024-12-06 18:03:33.698081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.698091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.698395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.698405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.698703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.698713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.699010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.699019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.699348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.699358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.699647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.699657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.699960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.699969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.700144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.700156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.700346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.700355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.700695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.700704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 
00:26:46.057 [2024-12-06 18:03:33.700910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.057 [2024-12-06 18:03:33.700919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.057 qpair failed and we were unable to recover it. 00:26:46.057 [2024-12-06 18:03:33.701131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.701141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.701467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.701477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.701766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.701776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.702057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.702067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.702373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.702383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.702581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.702591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.702918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.702928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.703240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.703250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.703597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.703607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 
00:26:46.058 [2024-12-06 18:03:33.703928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.703938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.704269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.704280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.704606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.704616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.704973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.704983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.705270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.705280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.705586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.705596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.705909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.705919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.706238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.706248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.706413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.706424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.706694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.706704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 
00:26:46.058 [2024-12-06 18:03:33.707007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.707017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.707318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.707329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.707499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.707509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.707777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.707787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.708109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.708122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.708341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.708350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.708669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.708679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.708982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.708991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.709283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.709294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.709585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.709595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 
00:26:46.058 [2024-12-06 18:03:33.709878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.709888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.710096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.710110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.710405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.710415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.710697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.710706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.710991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.058 [2024-12-06 18:03:33.711000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.058 qpair failed and we were unable to recover it. 00:26:46.058 [2024-12-06 18:03:33.711287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.711298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.711613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.711622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.711909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.711919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.712202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.712213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.712499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.712508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 
00:26:46.059 [2024-12-06 18:03:33.712792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.712801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.713140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.713150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.713418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.713428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.713735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.713744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.714032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.714042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.714366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.714376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.714656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.714666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.714925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.714935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.715234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.715245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.715590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.715600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 
00:26:46.059 [2024-12-06 18:03:33.715884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.715894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.716177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.716189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.716478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.716488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.716780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.716791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.717119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.717130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.717467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.717477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.717768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.717777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.718070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.718080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.718371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.718380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 00:26:46.059 [2024-12-06 18:03:33.718699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.059 [2024-12-06 18:03:33.718709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.059 qpair failed and we were unable to recover it. 
00:26:46.059 [2024-12-06 18:03:33.718906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.059 [2024-12-06 18:03:33.718916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.059 qpair failed and we were unable to recover it.
[... the same three-line pattern - connect() failed, errno = 111 / sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." - repeats back-to-back from 18:03:33.719235 through 18:03:33.761122 ...]
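errno = 111 is ECONNREFUSED on Linux: at this point in the test no NVMe/TCP listener is up on the target side, so every connect() to 10.0.0.2:4420 is refused by the kernel before a qpair can be established, and nvme_tcp_qpair_connect_sock reports the qpair as unrecoverable. A minimal shell probe of the same condition (a sketch for illustration only, assuming a Linux host with bash's /dev/tcp redirection; the address and port are taken from the log):

    # Probe the listener the host keeps retrying against. If nothing listens
    # on 10.0.0.2:4420, the kernel refuses the connection, which is exactly
    # the errno = 111 (ECONNREFUSED) the SPDK sock layer logs above.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
        echo 'connect refused: errno 111 (ECONNREFUSED), no NVMe/TCP listener on 4420'
    fi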
[... errno = 111 retry triplets continue (18:03:33.761421 - 18:03:33.762404) ...]
00:26:46.063 [2024-12-06 18:03:33.762669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.063 [2024-12-06 18:03:33.762679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.063 qpair failed and we were unable to recover it.
00:26:46.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3215698 Killed "${NVMF_APP[@]}" "$@"
00:26:46.063 18:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:26:46.063 18:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
[... errno = 111 retry triplets continue (18:03:33.763003 - 18:03:33.763986) ...]
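The shell message above is the point of the test: target_disconnect.sh has killed the running target application (pid 3215698), which is why every qpair reconnect fails with errno = 111, and test case tc2 now calls disconnect_init 10.0.0.2, which per the trace below brings a fresh target up via nvmfappstart -m 0xF0. A rough bash sketch of that kill-and-relaunch step, reconstructed only from the trace (the signal, ordering, and helper bodies are assumptions, not the suite's exact code):

    # Kill the old target, then relaunch it; NVMF_APP, nvmfpid, and the 0xF0
    # core mask appear in the trace, everything else here is illustrative.
    kill -9 "$nvmfpid"              # old target gone -> host connect() now gets ECONNREFUSED
    "${NVMF_APP[@]}" -m 0xF0 &      # disconnect_init starts a new target app
    nvmfpid=$!                      # new pid (3216735 in this run)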
00:26:46.063 18:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:46.063 18:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:46.063 18:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... errno = 111 connect() retry triplets for tqpair=0x132d490 (addr=10.0.0.2, port=4420) continue around the shell trace, 18:03:33.764335 through 18:03:33.769882 ...]
[... errno = 111 retry triplets continue (18:03:33.770077 - 18:03:33.770782) ...]
00:26:46.064 18:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3216735
00:26:46.064 18:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3216735
00:26:46.064 18:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3216735 ']'
00:26:46.064 18:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:46.064 18:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:46.064 18:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:46.064 18:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:46.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:46.064 18:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:46.064 18:03:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... errno = 111 retry triplets continue through 18:03:33.771341 ...]
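waitforlisten blocks until the freshly launched target (pid 3216735), started inside the cvl_0_0_ns_spdk network namespace, is alive and answering on its RPC socket /var/tmp/spdk.sock, giving up after max_retries attempts. The real helper lives in autotest_common.sh and does more than this; the loop below is only a sketch of the polling behaviour the trace implies (max_retries=100 plus the "Waiting for process..." message):

    # Poll until the target process listens on its RPC UNIX domain socket.
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    nvmfpid=3216735                                  # pid from the trace
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'target died before listening'; break; }
        [ -S "$rpc_addr" ] && break                  # socket exists -> target is up
        sleep 0.1
    done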
00:26:46.064 [2024-12-06 18:03:33.771661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.064 [2024-12-06 18:03:33.771671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.064 qpair failed and we were unable to recover it.
00:26:46.064 [2024-12-06 18:03:33.771755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.064 [2024-12-06 18:03:33.771765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.064 qpair failed and we were unable to recover it.
00:26:46.064 [2024-12-06 18:03:33.772066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.064 [2024-12-06 18:03:33.772076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.064 qpair failed and we were unable to recover it.
00:26:46.064 [2024-12-06 18:03:33.772272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.064 [2024-12-06 18:03:33.772283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.064 qpair failed and we were unable to recover it.
00:26:46.064 [2024-12-06 18:03:33.772565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.064 [2024-12-06 18:03:33.772576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.064 qpair failed and we were unable to recover it.
00:26:46.064 [2024-12-06 18:03:33.772890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.064 [2024-12-06 18:03:33.772900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.064 qpair failed and we were unable to recover it.
00:26:46.064 [2024-12-06 18:03:33.773061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.064 [2024-12-06 18:03:33.773073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.064 qpair failed and we were unable to recover it.
00:26:46.064 [2024-12-06 18:03:33.773322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.064 [2024-12-06 18:03:33.773333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.064 qpair failed and we were unable to recover it.
00:26:46.064 [2024-12-06 18:03:33.773650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.064 [2024-12-06 18:03:33.773660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.064 qpair failed and we were unable to recover it.
00:26:46.064 [2024-12-06 18:03:33.773976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.064 [2024-12-06 18:03:33.773987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.064 qpair failed and we were unable to recover it.
00:26:46.064 [2024-12-06 18:03:33.774175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.064 [2024-12-06 18:03:33.774187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.064 qpair failed and we were unable to recover it.
00:26:46.064 [2024-12-06 18:03:33.774529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.064 [2024-12-06 18:03:33.774540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.064 qpair failed and we were unable to recover it.
00:26:46.064 [2024-12-06 18:03:33.774814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.064 [2024-12-06 18:03:33.774824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.064 qpair failed and we were unable to recover it.
00:26:46.064 [2024-12-06 18:03:33.775121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.064 [2024-12-06 18:03:33.775134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.064 qpair failed and we were unable to recover it.
00:26:46.064 [2024-12-06 18:03:33.775466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.064 [2024-12-06 18:03:33.775479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.064 qpair failed and we were unable to recover it.
00:26:46.064 [2024-12-06 18:03:33.775806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.064 [2024-12-06 18:03:33.775816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.064 qpair failed and we were unable to recover it.
00:26:46.064 [2024-12-06 18:03:33.776140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.064 [2024-12-06 18:03:33.776151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.064 qpair failed and we were unable to recover it.
00:26:46.064 [2024-12-06 18:03:33.776361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.776371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.776703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.776714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.777016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.777027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.777331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.777341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.777651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.777661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.777969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.777979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.778299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.778310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.778615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.778627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.778912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.778922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.779139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.779150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.779339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.779349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.779688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.779698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.780025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.780036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.780345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.780356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.780661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.780672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.780997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.781007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.781270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.781281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.781477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.781488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.781810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.781820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.781960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.781970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.782290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.782301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.782625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.782636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.782922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.782932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.783259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.783270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.783572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.783584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.783904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.783914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.784116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.784127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.784321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.784331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.784652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.784662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.784940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.784951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.785258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.785269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.785453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.785463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.785743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.785754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.065 [2024-12-06 18:03:33.786064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.065 [2024-12-06 18:03:33.786074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.065 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.786256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.786266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.786587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.786598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.786969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.786980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.787342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.787353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.787518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.787528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.787861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.787871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.788090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.788105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.788440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.788450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.788749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.788759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.789062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.789073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.789434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.789446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.789612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.789623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.789942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.789952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.790124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.790136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.790315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.790326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.790650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.790661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.790977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.790988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.791189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.791200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.791488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.791499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.791818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.791829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.792181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.792192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.792383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.792393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.792747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.792758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.793056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.793067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.793440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.793451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.793745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.793756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.794070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.794080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.794444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.794454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.794750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.794760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.794973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.794983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.795295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.795305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.795478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.795488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.795926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.795937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.796261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.796272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.796620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.796630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.796827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.066 [2024-12-06 18:03:33.796837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.066 qpair failed and we were unable to recover it.
00:26:46.066 [2024-12-06 18:03:33.797150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.797160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.797328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.797338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.797632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.797642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.797992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.798003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.798325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.798336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.798625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.798635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.798944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.798954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.799133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.799143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.799518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.799529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.799823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.799834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.800143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.800154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.800328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.800339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.800548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.800558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.800914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.800925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.801244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.801255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.801647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.801657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.801957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.801968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.802268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.802280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.802570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.802581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.802897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.802909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.803145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.803157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.803483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.803493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.803807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.803819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.804141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.804152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.804552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.804562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.804887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.804897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.805201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.805212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.805530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.805540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.805732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.805744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.806067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.806077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.806163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.806173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.806236] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization...
00:26:46.067 [2024-12-06 18:03:33.806280] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:46.067 [2024-12-06 18:03:33.806452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.806461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
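The "Starting SPDK" and "DPDK EAL parameters" lines a few records up mark the target process itself coming up: SPDK v25.01-pre at git sha1 88dfb58dc (the same revision checked out at the top of this log) initializing DPDK 24.03.0. The -m 0xF0 cpumask passed to nvmf_tgt surfaces here as the EAL coremask -c 0xF0, a bitmask in which bit i selects CPU core i. A quick way to decode such a mask (a standalone sketch, not part of the test):

    # Decode a cpumask like -c 0xF0: bit i set means core i is selected.
    mask=0xF0
    for i in {0..31}; do
        (( (mask >> i) & 1 )) && echo "core $i selected"
    done
    # 0xF0 = 0b11110000, so this prints cores 4, 5, 6 and 7.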
00:26:46.067 [2024-12-06 18:03:33.806645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.806656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.067 qpair failed and we were unable to recover it.
00:26:46.067 [2024-12-06 18:03:33.806841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.067 [2024-12-06 18:03:33.806851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.807132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.807143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.807499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.807510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.807795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.807807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.808159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.808170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.808512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.808523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.808838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.808849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.809158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.809169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.809536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.809547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.809727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.809739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.810057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.810068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.810477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.810488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.810782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.810792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.811089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.811111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.811425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.811436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.811766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.811777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.812076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.812087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.812394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.812405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.812795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.812805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.812995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.813005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.813206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.813216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.813540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.813551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.813741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.813751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.813940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.813950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.814107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.814117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.814469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.814480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.814791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.814801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.815111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.815122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.815528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.068 [2024-12-06 18:03:33.815538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.068 qpair failed and we were unable to recover it.
00:26:46.068 [2024-12-06 18:03:33.815723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.815735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.816030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.816040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.816371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.816382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.816723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.816733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.816932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.816943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.817251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.817263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.817573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.817584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.817897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.817907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.818050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.818059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.818383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.818394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.818680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.818690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.818859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.818869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.819145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.819157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.819477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.819489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.819776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.819786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.820091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.820108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.820447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.820457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.820751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.820761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.821044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.821054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.821243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.821254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.821551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.821561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.821852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.821862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.822170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.822180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.822358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.822368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.822692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.822702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.822977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.822987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.823043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.823052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.823345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.823356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.823655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.823665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.823845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.823855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.823931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.823940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.824098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.824112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.824297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.824306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.824667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.824677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.824852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.824862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.825052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.825061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.825403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.825413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.069 qpair failed and we were unable to recover it.
00:26:46.069 [2024-12-06 18:03:33.825700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.069 [2024-12-06 18:03:33.825710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.070 qpair failed and we were unable to recover it.
00:26:46.070 [2024-12-06 18:03:33.826017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-12-06 18:03:33.826027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.070 qpair failed and we were unable to recover it.
00:26:46.070 [2024-12-06 18:03:33.826341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-12-06 18:03:33.826352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.070 qpair failed and we were unable to recover it.
00:26:46.070 [2024-12-06 18:03:33.826582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-12-06 18:03:33.826594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.070 qpair failed and we were unable to recover it.
00:26:46.070 [2024-12-06 18:03:33.826926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-12-06 18:03:33.826937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.070 qpair failed and we were unable to recover it.
00:26:46.070 [2024-12-06 18:03:33.827249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-12-06 18:03:33.827259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.070 qpair failed and we were unable to recover it.
00:26:46.070 [2024-12-06 18:03:33.827556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-12-06 18:03:33.827566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.070 qpair failed and we were unable to recover it.
00:26:46.070 [2024-12-06 18:03:33.827849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-12-06 18:03:33.827859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.070 qpair failed and we were unable to recover it.
00:26:46.070 [2024-12-06 18:03:33.828059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-12-06 18:03:33.828070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.070 qpair failed and we were unable to recover it.
00:26:46.070 [2024-12-06 18:03:33.828275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-12-06 18:03:33.828285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.070 qpair failed and we were unable to recover it.
00:26:46.070 [2024-12-06 18:03:33.828484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-12-06 18:03:33.828493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.070 qpair failed and we were unable to recover it.
00:26:46.070 [2024-12-06 18:03:33.828693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-12-06 18:03:33.828703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.070 qpair failed and we were unable to recover it.
00:26:46.070 [2024-12-06 18:03:33.829023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.070 [2024-12-06 18:03:33.829033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.070 qpair failed and we were unable to recover it.
00:26:46.070 [2024-12-06 18:03:33.829332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.829342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.829703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.829713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.829900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.829910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.830082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.830092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.830369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.830380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.830679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.830689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.831014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.831025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.831370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.831381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.831711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.831720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.832006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.832016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 
00:26:46.070 [2024-12-06 18:03:33.832422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.832433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.832616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.832626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.832809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.832820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.833131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.833142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.833543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.833553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.833873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.833883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.834200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.834210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.834548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.834558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.834756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.834767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.835155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.835166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 
00:26:46.070 [2024-12-06 18:03:33.835528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.835538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.835721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.835732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.070 [2024-12-06 18:03:33.836019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.070 [2024-12-06 18:03:33.836029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.070 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.836398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.836409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.836702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.836712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.837097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.837112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.837423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.837434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.837763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.837773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.838060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.838070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.838421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.838432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 
00:26:46.071 [2024-12-06 18:03:33.838598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.838608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.838812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.838821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.839152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.839162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.839484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.839493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.839813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.839823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.840143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.840153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.840488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.840499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.840800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.840811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.841123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.841133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.841567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.841578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 
00:26:46.071 [2024-12-06 18:03:33.841912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.841922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.842233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.842243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.842552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.842562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.842870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.842880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.843230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.843240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.843546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.843557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.843939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.843949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.844258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.844270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.844568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.844579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.844891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.844902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 
00:26:46.071 [2024-12-06 18:03:33.845121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.845132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.845314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.845323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.845628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.845642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.845819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.845829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.846115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.846126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.846459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.846470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.846765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.846774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.847107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.847118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.847288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.847302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 00:26:46.071 [2024-12-06 18:03:33.847627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.847637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.071 qpair failed and we were unable to recover it. 
00:26:46.071 [2024-12-06 18:03:33.847787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.071 [2024-12-06 18:03:33.847798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.848094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.848109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.848483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.848493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.848833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.848844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.849192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.849203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.849584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.849594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.849926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.849935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.850251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.850261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.850460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.850469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.850759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.850769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 
00:26:46.072 [2024-12-06 18:03:33.851065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.851075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.851139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.851149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.851485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.851495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.851845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.851855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.852210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.852220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.852546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.852557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.852867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.852876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.853075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.853085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.853423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.853433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.853752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.853762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 
00:26:46.072 [2024-12-06 18:03:33.853891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.853901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.854192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.854203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.854553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.854563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.854893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.854903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.855171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.855182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.855495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.855507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.855569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.855577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.855890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.855899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.856107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.856119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.856337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.856346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 
00:26:46.072 [2024-12-06 18:03:33.856689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.856699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.072 [2024-12-06 18:03:33.857025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.072 [2024-12-06 18:03:33.857034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.072 qpair failed and we were unable to recover it. 00:26:46.347 [2024-12-06 18:03:33.857397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.347 [2024-12-06 18:03:33.857409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.347 qpair failed and we were unable to recover it. 00:26:46.347 [2024-12-06 18:03:33.857707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.347 [2024-12-06 18:03:33.857717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.347 qpair failed and we were unable to recover it. 00:26:46.347 [2024-12-06 18:03:33.857895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.347 [2024-12-06 18:03:33.857905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.858214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.858224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.858531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.858541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.858739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.858749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.859114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.859125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.859430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.859441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 
00:26:46.348 [2024-12-06 18:03:33.859746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.859756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.860081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.860091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.860460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.860471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.860649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.860658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.860956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.860965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.861286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.861296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.861626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.861636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.862068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.862077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.862425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.862436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.862771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.862781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 
00:26:46.348 [2024-12-06 18:03:33.863110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.863120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.863418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.863428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.863608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.863622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.863927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.863937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.864274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.864284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.864449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.864459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.864843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.864852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.865179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.865189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.865513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.865523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.865812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.865822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 
00:26:46.348 [2024-12-06 18:03:33.865995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.866006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.866317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.866328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.866624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.866633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.866931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.348 [2024-12-06 18:03:33.866940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.348 qpair failed and we were unable to recover it. 00:26:46.348 [2024-12-06 18:03:33.867184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.867195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.867543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.867553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.867860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.867870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.868221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.868231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.868422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.868431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.868731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.868740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 
00:26:46.349 [2024-12-06 18:03:33.868946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.868956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.869165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.869175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.869506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.869516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.869809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.869819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.870115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.870125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.870424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.870434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.870805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.870815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.871157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.871167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.871465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.871475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.871679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.871689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 
00:26:46.349 [2024-12-06 18:03:33.871992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.872003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.872323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.872334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.872635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.872646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.872849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.872858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.873151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.873161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.873358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.873368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.873629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.873639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.873964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.873974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 00:26:46.349 [2024-12-06 18:03:33.874127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.349 [2024-12-06 18:03:33.874138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.349 qpair failed and we were unable to recover it. 
00:26:46.349 Read completed with error (sct=0, sc=8)
00:26:46.349 starting I/O failed
00:26:46.349 Read completed with error (sct=0, sc=8)
00:26:46.349 starting I/O failed
00:26:46.349 Read completed with error (sct=0, sc=8)
00:26:46.349 starting I/O failed
00:26:46.349 Read completed with error (sct=0, sc=8)
00:26:46.349 starting I/O failed
00:26:46.349 Read completed with error (sct=0, sc=8)
00:26:46.349 starting I/O failed
00:26:46.349 Read completed with error (sct=0, sc=8)
00:26:46.349 starting I/O failed
00:26:46.349 Read completed with error (sct=0, sc=8)
00:26:46.349 starting I/O failed
00:26:46.349 Read completed with error (sct=0, sc=8)
00:26:46.349 starting I/O failed
00:26:46.349 Read completed with error (sct=0, sc=8)
00:26:46.349 starting I/O failed
00:26:46.349 Read completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Read completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Read completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Read completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Write completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Read completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Write completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Write completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Write completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Read completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Write completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Write completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Read completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Read completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Write completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Write completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Write completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Write completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Write completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Write completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Write completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Read completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 Write completed with error (sct=0, sc=8)
00:26:46.350 starting I/O failed
00:26:46.350 [2024-12-06 18:03:33.874688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:46.350 [2024-12-06 18:03:33.875096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.875164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff248000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-12-06 18:03:33.875712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.875780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff248000b90 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-12-06 18:03:33.876135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.876146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-12-06 18:03:33.876454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.876464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-12-06 18:03:33.876838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.876848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-12-06 18:03:33.877175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.877186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-12-06 18:03:33.877540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.877550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-12-06 18:03:33.877861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.877871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-12-06 18:03:33.878187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.878197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-12-06 18:03:33.878510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.878520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-12-06 18:03:33.878816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.878827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-12-06 18:03:33.879004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.879014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-12-06 18:03:33.879328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.879339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-12-06 18:03:33.879628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.879638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-12-06 18:03:33.879977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.879987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-12-06 18:03:33.880348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.880360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-12-06 18:03:33.880555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.880565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-12-06 18:03:33.880870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.880880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-12-06 18:03:33.881207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.881217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.350 [2024-12-06 18:03:33.881538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.350 [2024-12-06 18:03:33.881548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.350 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.881738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.881748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.882027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.882038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.882351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.882362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.882647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.882657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.882963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.882976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.883144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.883155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.883494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.883504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.883883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.883893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.884189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.884200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.884523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.884533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.884899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.884908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.885106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.885116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.885401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.885411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.885729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.885739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.886053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.886063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.886377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.886388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.886554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.886563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.886870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.886880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.887231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.887241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.887572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.887582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.887893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.887903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.888212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.888223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.888448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.888457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.888774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.888783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.888833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.888843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.889173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.889183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.889486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.889496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.889669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.351 [2024-12-06 18:03:33.889678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.351 qpair failed and we were unable to recover it.
00:26:46.351 [2024-12-06 18:03:33.889860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.889870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.890187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.890197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.890515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.890526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.890867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.890880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.891200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.891210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.891522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.891531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.891877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.891887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.892190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.892200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.892388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.892399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.892606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.892616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.892973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.892983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.893337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.893347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.893473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:46.352 [2024-12-06 18:03:33.893715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.893725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.893921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.893930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.894242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.894253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.894431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.894440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.894635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.894644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.894965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.894975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.895155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.895166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.895468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.895478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.895791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.895801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.896116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.896127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.896470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.896480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.896840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.896850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.897160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.897170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.897379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.897389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.897809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.897819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.898031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.898041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.898236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.898246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.352 [2024-12-06 18:03:33.898639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.352 [2024-12-06 18:03:33.898649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.352 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.898957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.898969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.899157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.899167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.899449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.899459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.899723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.899733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.900066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.900075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.900364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.900374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.900676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.900687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.900984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.900994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.901173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.901183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.901519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.901529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.901707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.901717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.901999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.902008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.902361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.902371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.902676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.902688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.902972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.902982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.903310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.903320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.903663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.903672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.904015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.904025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.904239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.904250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.904539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.904549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.904914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.904924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.905228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.905239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.905616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.905627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.905911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.905921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.906129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.906141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.353 qpair failed and we were unable to recover it.
00:26:46.353 [2024-12-06 18:03:33.906578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.353 [2024-12-06 18:03:33.906588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.906773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.906783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.906967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.906978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.907151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.907161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.907512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.907523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.907725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.907735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.907969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.907979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.908131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.908142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.908316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.908326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.908687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.908696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.908854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.908864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.908911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.908920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.909236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.909247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.909439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.909449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.909633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.909642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.909958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.909968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.910276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.910287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.910650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.910660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.910850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.910860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.911201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.911212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.911390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.911399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.911553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.911563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.911903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.911913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.912289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.912299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.912629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.354 [2024-12-06 18:03:33.912639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.354 qpair failed and we were unable to recover it.
00:26:46.354 [2024-12-06 18:03:33.912965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.912974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.913279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.913289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.913467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.913477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.913777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.913786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.914097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.914116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.914427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.914437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.914776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.914787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.914967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.914977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.915299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.915309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.915633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.915643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.915955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.915965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.916333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.916343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.916721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.916730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.917059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.917069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.917380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.917391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.917703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.917713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.918010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.918020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.918337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.918348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.918683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.918692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.918996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.919006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.919404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.919415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.919719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.919729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.920054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.920064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.920263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.920274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.920619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.920629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.920833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.920843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.921032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.921043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.921218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.921228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.355 [2024-12-06 18:03:33.921550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.355 [2024-12-06 18:03:33.921561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.355 qpair failed and we were unable to recover it.
00:26:46.356 [2024-12-06 18:03:33.921849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.356 [2024-12-06 18:03:33.921859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.356 qpair failed and we were unable to recover it.
00:26:46.356 [2024-12-06 18:03:33.922178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.356 [2024-12-06 18:03:33.922189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.356 qpair failed and we were unable to recover it.
00:26:46.356 [2024-12-06 18:03:33.922559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.356 [2024-12-06 18:03:33.922570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.356 qpair failed and we were unable to recover it.
00:26:46.356 [2024-12-06 18:03:33.922890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.356 [2024-12-06 18:03:33.922900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.356 qpair failed and we were unable to recover it.
00:26:46.356 [2024-12-06 18:03:33.923085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.923095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 00:26:46.356 [2024-12-06 18:03:33.923404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.923415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 00:26:46.356 [2024-12-06 18:03:33.923745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.923756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 00:26:46.356 [2024-12-06 18:03:33.923954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.923964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 00:26:46.356 [2024-12-06 18:03:33.924305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.924316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 00:26:46.356 [2024-12-06 18:03:33.924640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.924652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 00:26:46.356 [2024-12-06 18:03:33.924980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.924991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 00:26:46.356 [2024-12-06 18:03:33.925235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.925246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 00:26:46.356 [2024-12-06 18:03:33.925301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.925310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 00:26:46.356 [2024-12-06 18:03:33.925623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.925634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 
00:26:46.356 [2024-12-06 18:03:33.925946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.925957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 00:26:46.356 [2024-12-06 18:03:33.926267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.926278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 00:26:46.356 [2024-12-06 18:03:33.926587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.926597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 00:26:46.356 [2024-12-06 18:03:33.926907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.926917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 00:26:46.356 [2024-12-06 18:03:33.927104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.927115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 00:26:46.356 [2024-12-06 18:03:33.927482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.927493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 00:26:46.356 [2024-12-06 18:03:33.927553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.927562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 00:26:46.356 [2024-12-06 18:03:33.927898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.927908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 00:26:46.356 [2024-12-06 18:03:33.928294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.928304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 00:26:46.356 [2024-12-06 18:03:33.928617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.356 [2024-12-06 18:03:33.928628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.356 qpair failed and we were unable to recover it. 
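Side note for readers of this log: errno 111 on Linux is ECONNREFUSED, meaning nothing was accepting connections at 10.0.0.2:4420 (the NVMe/TCP default port) when the initiator dialed it. Below is a minimal standalone C sketch, not SPDK's posix.c, that reproduces the same errno with plain POSIX sockets; the address and port simply mirror the log and are otherwise arbitrary.

/* Minimal sketch (not SPDK code): connect() to an address/port with no
 * listener fails with errno 111 (ECONNREFUSED), matching the
 * posix_sock_create errors in this log. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the target, errno is ECONNREFUSED (111 on Linux). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}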
[... the connect() failed / sock connection error / qpair recovery-failure sequence continues to repeat ...]
00:26:46.356 [2024-12-06 18:03:33.929804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:46.356 [2024-12-06 18:03:33.929830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:46.356 [2024-12-06 18:03:33.929838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:46.356 [2024-12-06 18:03:33.929845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:46.356 [2024-12-06 18:03:33.929851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[... error sequence repeats from 18:03:33.929 through 18:03:33.931 ...]
00:26:46.357 [2024-12-06 18:03:33.931379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:26:46.357 [2024-12-06 18:03:33.931507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:26:46.357 [2024-12-06 18:03:33.931600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.357 [2024-12-06 18:03:33.931610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.357 qpair failed and we were unable to recover it.
00:26:46.357 [2024-12-06 18:03:33.931638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:26:46.357 [2024-12-06 18:03:33.931639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
[... error sequence continues ...]
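The reactor notices above come from SPDK's event framework starting one polling reactor per core in the configured core mask (cores 4-7 here). Below is a hedged sketch of how an application selects that mask via spdk_app_start(); the app name and mask value are illustrative assumptions, not taken from this test, and spdk_app_opts fields can vary between SPDK versions.

/* Hedged sketch, not the test harness's actual code: starting the SPDK
 * event framework with one reactor on each of cores 4-7, which would
 * produce "Reactor started on core 4..7" notices like the ones above. */
#include <stdio.h>

#include "spdk/event.h"

static void
app_start_cb(void *ctx)
{
    /* Invoked on the main reactor once all reactors are running. */
    printf("app started; reactors are polling\n");
}

int
main(int argc, char **argv)
{
    struct spdk_app_opts opts = {};
    int rc;

    (void)argc;
    (void)argv;

    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "nvmf_example";   /* illustrative app name (assumption) */
    opts.reactor_mask = "0xF0";   /* bits 4-7 set -> reactors on cores 4, 5, 6, 7 */

    /* Blocks until spdk_app_stop() is called from within the app. */
    rc = spdk_app_start(&opts, app_start_cb, NULL);

    spdk_app_fini();
    return rc;
}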
[... the same connect() failed (errno = 111) / sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence repeats continuously, with only the timestamps changing, from 18:03:33.934 through 18:03:33.977 ...]
00:26:46.362 [2024-12-06 18:03:33.977461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.362 [2024-12-06 18:03:33.977471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.362 qpair failed and we were unable to recover it. 00:26:46.362 [2024-12-06 18:03:33.977634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.362 [2024-12-06 18:03:33.977644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.362 qpair failed and we were unable to recover it. 00:26:46.362 [2024-12-06 18:03:33.977947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.362 [2024-12-06 18:03:33.977957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.362 qpair failed and we were unable to recover it. 00:26:46.362 [2024-12-06 18:03:33.978273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.362 [2024-12-06 18:03:33.978283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.362 qpair failed and we were unable to recover it. 00:26:46.362 [2024-12-06 18:03:33.978573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.362 [2024-12-06 18:03:33.978582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.362 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.978934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.978944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.979334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.979345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.979539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.979549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.979864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.979873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.980192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.980202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 
00:26:46.363 [2024-12-06 18:03:33.980408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.980419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.980614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.980624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.980924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.980933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.981266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.981277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.981468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.981478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.981781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.981791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.982129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.982140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.982342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.982353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.982664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.982673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.982993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.983003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 
00:26:46.363 [2024-12-06 18:03:33.983340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.983350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.983713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.983724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.983917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.983928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.984198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.984209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.984533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.984542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.984885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.984897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.985075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.985086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.985413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.985424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.985757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.985766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.985876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.985885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 
00:26:46.363 [2024-12-06 18:03:33.986185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.986196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.986572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.986583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.986769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.986779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.986954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.986964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.363 [2024-12-06 18:03:33.987149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.363 [2024-12-06 18:03:33.987159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.363 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.987478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.987488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.987813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.987823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.988114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.988124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.988308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.988319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.988623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.988633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 
00:26:46.364 [2024-12-06 18:03:33.988965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.988975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.989187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.989197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.989553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.989563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.989923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.989932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.990112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.990122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.990478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.990488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.990813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.990823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.991136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.991147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.991456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.991465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.991809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.991819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 
00:26:46.364 [2024-12-06 18:03:33.992003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.992014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.992322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.992332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.992652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.992664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.992838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.992848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.993208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.993218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.993623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.993633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.993827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.993837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.994156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.994167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.994495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.994505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.994548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.994557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 
00:26:46.364 [2024-12-06 18:03:33.994863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.994873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.995121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.995132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.995327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.995337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.995559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.995568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.364 [2024-12-06 18:03:33.995750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.364 [2024-12-06 18:03:33.995760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.364 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:33.995967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:33.995978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:33.996290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:33.996301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:33.996612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:33.996622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:33.996864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:33.996874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:33.997193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:33.997203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 
00:26:46.365 [2024-12-06 18:03:33.997493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:33.997503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:33.997833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:33.997843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:33.998134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:33.998144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:33.998454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:33.998464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:33.998775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:33.998785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:33.998978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:33.998988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:33.999150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:33.999160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:33.999439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:33.999449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:33.999773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:33.999784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:33.999967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:33.999977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 
00:26:46.365 [2024-12-06 18:03:34.000278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:34.000289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:34.000639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:34.000649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:34.000845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:34.000854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:34.001098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:34.001112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:34.001322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:34.001332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:34.001389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:34.001398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:34.001595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:34.001604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:34.001652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:34.001662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:34.002011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:34.002020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:34.002408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:34.002419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 
00:26:46.365 [2024-12-06 18:03:34.002754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:34.002765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:34.003099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:34.003114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:34.003392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:34.003402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:34.003591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.365 [2024-12-06 18:03:34.003601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.365 qpair failed and we were unable to recover it. 00:26:46.365 [2024-12-06 18:03:34.003777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.003787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.004121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.004131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.004323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.004333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.004496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.004506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.004551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.004560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.004869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.004879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 
00:26:46.366 [2024-12-06 18:03:34.005217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.005228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.005541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.005551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.005619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.005630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.005800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.005809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.006108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.006119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.006334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.006344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.006740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.006751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.007032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.007042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.007423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.007434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.007774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.007784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 
00:26:46.366 [2024-12-06 18:03:34.007949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.007961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.008155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.008165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.008376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.008386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.008563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.008573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.008895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.008904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.009183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.009193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.009367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.009377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.366 [2024-12-06 18:03:34.009549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.366 [2024-12-06 18:03:34.009559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.366 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.009868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.009878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.010059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.010069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 
00:26:46.367 [2024-12-06 18:03:34.010385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.010398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.010561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.010571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.010761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.010772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.010950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.010961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.011151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.011161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.011345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.011355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.011656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.011666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.011976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.011986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.012300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.012311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.012594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.012603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 
00:26:46.367 [2024-12-06 18:03:34.012912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.012921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.013230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.013241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.013569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.013579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.013888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.013898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.014211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.014222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.014502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.014512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.014804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.014814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.015141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.015152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.015456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.015466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.015789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.015799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 
00:26:46.367 [2024-12-06 18:03:34.016112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.016122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.016497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.016507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.016713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.016723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.017045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.017054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.017395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.017405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.017724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.017734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.017897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.367 [2024-12-06 18:03:34.017907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.367 qpair failed and we were unable to recover it. 00:26:46.367 [2024-12-06 18:03:34.018199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.368 [2024-12-06 18:03:34.018212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.368 qpair failed and we were unable to recover it. 00:26:46.368 [2024-12-06 18:03:34.018515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.368 [2024-12-06 18:03:34.018525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.368 qpair failed and we were unable to recover it. 00:26:46.368 [2024-12-06 18:03:34.018712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.368 [2024-12-06 18:03:34.018722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.368 qpair failed and we were unable to recover it. 
00:26:46.368 [2024-12-06 18:03:34.019030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.019041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.019344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.019355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.019660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.019670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.019966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.019976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.020274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.020285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.020622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.020633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.020920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.020931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.021260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.021271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.021580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.021590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.021774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.021784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.022113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.022125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.022455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.022466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.022758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.022768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.022943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.022954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.023277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.023288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.023465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.023475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.023788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.023798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.024006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.024016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.024372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.024382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.024692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.024702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.025074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.025085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.025294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.025304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.025652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.025663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.025844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.025854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.026043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.026056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.026383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.026394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.026793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.368 [2024-12-06 18:03:34.026803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.368 qpair failed and we were unable to recover it.
00:26:46.368 [2024-12-06 18:03:34.027093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.027107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.027347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.027358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.027698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.027708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.028024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.028033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.028361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.028372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.028555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.028565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.028738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.028748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.029062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.029073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.029252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.029263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.029620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.029631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.029822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.029833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.030157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.030167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.030565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.030574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.030775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.030786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.031001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.031011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.031373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.031384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.031559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.031569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.031917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.031929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.032119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.032131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.032445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.032456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.032643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.032654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.032938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.032948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.033237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.033248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.033462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.033472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.033823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.033833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.034157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.034168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.034482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.034492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.034671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.034682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.035052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.035063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.035243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.369 [2024-12-06 18:03:34.035254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.369 qpair failed and we were unable to recover it.
00:26:46.369 [2024-12-06 18:03:34.035552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.035562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.035749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.035761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.036070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.036081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.036203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.036213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.036573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.036583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.036916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.036926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.037268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.037279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.037460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.037470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.037519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.037531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.037739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.037749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.038052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.038063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.038243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.038254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.038429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.038439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.038759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.038770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.039053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.039064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.039400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.039411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.039611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.039622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.039796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.039808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.040003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.040013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.040358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.040368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.040551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.040561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.040937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.040947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.041264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.041276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.041451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.041461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.041507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.041517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.041671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.041681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.041931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.041941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.042297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.042308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.042511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.042521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.042836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.042846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.370 [2024-12-06 18:03:34.043154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.370 [2024-12-06 18:03:34.043164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.370 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.043343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.043353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.043693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.043703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.043982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.043992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.044325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.044337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.044502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.044515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.044827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.044839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.045157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.045169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.045473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.045483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.045788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.045797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.046109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.046120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.046418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.046429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.046762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.046772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.047047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.047058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.047109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.047121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.047434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.047444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.047611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.047622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.047939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.047950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.048233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.048245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.048436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.048447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.048670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.048681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.049067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.049077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.049407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.049418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.049754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.049764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.050049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.050059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.050422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.050433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.050732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.050743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.050913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.371 [2024-12-06 18:03:34.050923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.371 qpair failed and we were unable to recover it.
00:26:46.371 [2024-12-06 18:03:34.051108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.051119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.051295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.051305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.051617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.051628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.051994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.052004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.052321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.052334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.052664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.052674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.052841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.052851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.053185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.053196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.053362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.053374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.053708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.053718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.053883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.053893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.054204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.054215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.054552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.054563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.054867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.054878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.055182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.055194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.055423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.055434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.055764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.055774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.055921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.055931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.056146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.056158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.056347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.056357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.056768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.056779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.057083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.057093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.057427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.057438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.057742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.057752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.058077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.372 [2024-12-06 18:03:34.058087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.372 qpair failed and we were unable to recover it.
00:26:46.372 [2024-12-06 18:03:34.058254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.058264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.058584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.058595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.058759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.058772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.058816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.058826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.059173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.059184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.059514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.059524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.059874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.059884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.060197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.060208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.060376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.060386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.060794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.060805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.061149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.061159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.061493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.061504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.061799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.061809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.062075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.062086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.062378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.062389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.062577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.062588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.062893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.062904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.063067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.063077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.063401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.063412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.063709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.063719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.064046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.064056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.064246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.064257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.064296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.064305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.064613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.064623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.065010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.065021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.065341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.065352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.065528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.065537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.065734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.373 [2024-12-06 18:03:34.065744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.373 qpair failed and we were unable to recover it.
00:26:46.373 [2024-12-06 18:03:34.065933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.374 [2024-12-06 18:03:34.065942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.374 qpair failed and we were unable to recover it.
00:26:46.374 [2024-12-06 18:03:34.066265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.374 [2024-12-06 18:03:34.066276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.374 qpair failed and we were unable to recover it.
00:26:46.374 [2024-12-06 18:03:34.066583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.066594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.066913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.066924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.067107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.067119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.067441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.067452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.067782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.067793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.068129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.068139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.068304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.068314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.068629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.068639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.068976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.068986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.069268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.069278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 
00:26:46.374 [2024-12-06 18:03:34.069656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.069666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.069956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.069966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.070189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.070199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.070370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.070380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.070579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.070588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.070761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.070771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.071106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.071117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.071438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.071450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.071630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.071640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.071977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.071987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 
00:26:46.374 [2024-12-06 18:03:34.072294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.072305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.072487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.072497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.072828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.072838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.073205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.073215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.073558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.073568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.073904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.073913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.374 [2024-12-06 18:03:34.074081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.374 [2024-12-06 18:03:34.074091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.374 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.074419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.074429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.074725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.074735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.075017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.075027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 
00:26:46.375 [2024-12-06 18:03:34.075381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.075391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.075567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.075577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.075908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.075918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.076115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.076125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.076222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.076231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.076414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.076424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.076717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.076727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.077042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.077052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.077399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.077410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.077698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.077708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 
00:26:46.375 [2024-12-06 18:03:34.078005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.078015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.078319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.078329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.078667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.078677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.078968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.078978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.079268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.079281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.079618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.079628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.079920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.079929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.080259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.080269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.080577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.080588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.080923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.080933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 
00:26:46.375 [2024-12-06 18:03:34.081146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.081156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.081322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.081332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.081504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.081514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.081858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.081867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.082271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.082281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.082452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.375 [2024-12-06 18:03:34.082462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.375 qpair failed and we were unable to recover it. 00:26:46.375 [2024-12-06 18:03:34.082795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.082805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.083139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.083149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.083335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.083345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.083658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.083667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 
00:26:46.376 [2024-12-06 18:03:34.083855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.083866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.084217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.084228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.084571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.084581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.084920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.084930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.085170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.085181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.085498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.085507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.085840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.085851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.086149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.086159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.086501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.086511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.086690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.086699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 
00:26:46.376 [2024-12-06 18:03:34.086938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.086948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.087273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.087285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.087652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.087662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.087843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.087853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.088186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.088196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.088387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.088397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.088791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.088801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.088845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.088854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.089112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.089122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.089445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.089455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 
00:26:46.376 [2024-12-06 18:03:34.089744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.089753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.090051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.090062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.090376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.090386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.090686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.090696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.090986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.090996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.091177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.091188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.091518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.376 [2024-12-06 18:03:34.091529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.376 qpair failed and we were unable to recover it. 00:26:46.376 [2024-12-06 18:03:34.091820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.091830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.092178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.092188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.092516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.092526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 
00:26:46.377 [2024-12-06 18:03:34.092847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.092857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.093053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.093063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.093273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.093284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.093687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.093698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.094033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.094043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.094222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.094232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.094545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.094555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.094879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.094890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.095108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.095119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.095454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.095464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 
00:26:46.377 [2024-12-06 18:03:34.095779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.095790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.095961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.095971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.096313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.096324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.096642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.096653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.096849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.096859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.097073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.097084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.097437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.097448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.097766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.097776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.097971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.097981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.098176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.098186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 
00:26:46.377 [2024-12-06 18:03:34.098416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.098426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.098744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.098754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.099066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.099079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.099400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.099411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.099707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.377 [2024-12-06 18:03:34.099718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.377 qpair failed and we were unable to recover it. 00:26:46.377 [2024-12-06 18:03:34.100007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.100017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.100182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.100192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.100483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.100494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.100666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.100676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.100865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.100874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 
00:26:46.378 [2024-12-06 18:03:34.101220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.101230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.101408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.101418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.101710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.101719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.102045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.102056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.102379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.102389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.102742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.102753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.102919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.102929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.103251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.103261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.103452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.103462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.103623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.103633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 
00:26:46.378 [2024-12-06 18:03:34.103955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.103965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.104163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.104174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.104497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.104507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.104837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.104846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.105146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.105156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.105384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.105394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.105581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.105591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.378 qpair failed and we were unable to recover it. 00:26:46.378 [2024-12-06 18:03:34.105872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.378 [2024-12-06 18:03:34.105881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.379 qpair failed and we were unable to recover it. 00:26:46.379 [2024-12-06 18:03:34.106230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.379 [2024-12-06 18:03:34.106241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.379 qpair failed and we were unable to recover it. 00:26:46.379 [2024-12-06 18:03:34.106439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.379 [2024-12-06 18:03:34.106451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.379 qpair failed and we were unable to recover it. 
00:26:46.379 [2024-12-06 18:03:34.106773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.379 [2024-12-06 18:03:34.106783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.379 qpair failed and we were unable to recover it. 00:26:46.379 [2024-12-06 18:03:34.107098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.379 [2024-12-06 18:03:34.107115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.379 qpair failed and we were unable to recover it. 00:26:46.379 [2024-12-06 18:03:34.107441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.379 [2024-12-06 18:03:34.107451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.379 qpair failed and we were unable to recover it. 00:26:46.379 [2024-12-06 18:03:34.107775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.379 [2024-12-06 18:03:34.107785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.379 qpair failed and we were unable to recover it. 00:26:46.379 [2024-12-06 18:03:34.108094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.379 [2024-12-06 18:03:34.108116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.379 qpair failed and we were unable to recover it. 00:26:46.379 [2024-12-06 18:03:34.108439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.379 [2024-12-06 18:03:34.108449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.379 qpair failed and we were unable to recover it. 00:26:46.379 [2024-12-06 18:03:34.108772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.379 [2024-12-06 18:03:34.108782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.379 qpair failed and we were unable to recover it. 00:26:46.379 [2024-12-06 18:03:34.109198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.379 [2024-12-06 18:03:34.109209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.379 qpair failed and we were unable to recover it. 00:26:46.379 [2024-12-06 18:03:34.109258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.379 [2024-12-06 18:03:34.109268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.379 qpair failed and we were unable to recover it. 00:26:46.379 [2024-12-06 18:03:34.109588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.379 [2024-12-06 18:03:34.109598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.379 qpair failed and we were unable to recover it. 
00:26:46.379 [2024-12-06 18:03:34.109758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.379 [2024-12-06 18:03:34.109768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.379 qpair failed and we were unable to recover it.
00:26:46.661 [log condensed: the same three-line error sequence (posix_sock_create connect() failure with errno = 111, then nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x132d490 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeated with only the timestamp changing, for roughly 200 consecutive connect attempts between 18:03:34.109758 and 18:03:34.168322; every attempt was refused and no qpair recovered.]
00:26:46.662 [2024-12-06 18:03:34.168638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.168648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-12-06 18:03:34.168947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.168956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-12-06 18:03:34.169261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.169273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-12-06 18:03:34.169450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.169460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-12-06 18:03:34.169773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.169782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-12-06 18:03:34.170070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.170080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-12-06 18:03:34.170430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.170440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-12-06 18:03:34.170734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.170744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-12-06 18:03:34.170930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.170940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-12-06 18:03:34.171241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.171251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 
00:26:46.662 [2024-12-06 18:03:34.171568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.171578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-12-06 18:03:34.171738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.171748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-12-06 18:03:34.171940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.171950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-12-06 18:03:34.172255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.172266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-12-06 18:03:34.172444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.172453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-12-06 18:03:34.172691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.172701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-12-06 18:03:34.173070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.173080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-12-06 18:03:34.173402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.173412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-12-06 18:03:34.173719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.173729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-12-06 18:03:34.174119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.174130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 
00:26:46.662 [2024-12-06 18:03:34.174432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.174442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.662 [2024-12-06 18:03:34.174872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.662 [2024-12-06 18:03:34.174882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.662 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.175086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.175095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.175432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.175448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.175629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.175639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.176032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.176042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.176359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.176370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.176579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.176589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.176946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.176957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.177347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.177357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 
00:26:46.663 [2024-12-06 18:03:34.177688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.177698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.178002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.178012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.178207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.178218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.178412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.178422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.178784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.178794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.179116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.179127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.179463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.179473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.179793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.179804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.179998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.180008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.180323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.180333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 
00:26:46.663 [2024-12-06 18:03:34.180624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.180634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.180933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.180943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.181111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.181121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.181433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.181443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.181752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.181761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.182076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.182085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.182287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.182297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.182645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.182655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.182982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.182992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.183185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.183196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 
00:26:46.663 [2024-12-06 18:03:34.183520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.663 [2024-12-06 18:03:34.183530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.663 qpair failed and we were unable to recover it. 00:26:46.663 [2024-12-06 18:03:34.183845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.183855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.184148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.184158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.184344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.184354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.184559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.184568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.184810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.184819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.185236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.185247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.185586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.185596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.185913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.185924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.186269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.186279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 
00:26:46.664 [2024-12-06 18:03:34.186610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.186620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.186923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.186933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.187223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.187233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.187555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.187566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.187740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.187750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.188007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.188017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.188347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.188357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.188648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.188658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.189045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.189058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.189252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.189265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 
00:26:46.664 [2024-12-06 18:03:34.189634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.189644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.189975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.189984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.190187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.190197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.190501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.190511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.190734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.190744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.191089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.191104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.191268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.191278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.191614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.191624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.191920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.191930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 00:26:46.664 [2024-12-06 18:03:34.192218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.664 [2024-12-06 18:03:34.192228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.664 qpair failed and we were unable to recover it. 
00:26:46.664 [2024-12-06 18:03:34.192387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-12-06 18:03:34.192398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-12-06 18:03:34.192577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-12-06 18:03:34.192588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-12-06 18:03:34.192886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-12-06 18:03:34.192896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-12-06 18:03:34.193206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-12-06 18:03:34.193216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-12-06 18:03:34.193531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-12-06 18:03:34.193541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-12-06 18:03:34.193889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-12-06 18:03:34.193899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-12-06 18:03:34.194084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-12-06 18:03:34.194094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-12-06 18:03:34.194483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-12-06 18:03:34.194493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-12-06 18:03:34.194659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-12-06 18:03:34.194669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 00:26:46.665 [2024-12-06 18:03:34.194850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.665 [2024-12-06 18:03:34.194860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.665 qpair failed and we were unable to recover it. 
00:26:46.665 [2024-12-06 18:03:34.195048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.665 [2024-12-06 18:03:34.195057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.665 qpair failed and we were unable to recover it.
00:26:46.665 [2024-12-06 18:03:34.195350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.665 [2024-12-06 18:03:34.195363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.665 qpair failed and we were unable to recover it.
00:26:46.665 [2024-12-06 18:03:34.195681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.665 [2024-12-06 18:03:34.195691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.665 qpair failed and we were unable to recover it.
00:26:46.665 [2024-12-06 18:03:34.195998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.665 [2024-12-06 18:03:34.196007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.665 qpair failed and we were unable to recover it.
00:26:46.665 [2024-12-06 18:03:34.196179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.665 [2024-12-06 18:03:34.196189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.665 qpair failed and we were unable to recover it.
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Write completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Write completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Write completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Write completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Write completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Write completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Write completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Write completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Write completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Write completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Write completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Read completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 Write completed with error (sct=0, sc=8)
00:26:46.665 starting I/O failed
00:26:46.665 [2024-12-06 18:03:34.196872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:46.665 [2024-12-06 18:03:34.197406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.665 [2024-12-06 18:03:34.197484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420
00:26:46.665 qpair failed and we were unable to recover it.
00:26:46.665 [2024-12-06 18:03:34.197729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.197757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.198095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.198144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.198593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.198659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.198777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.198803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.199018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.199040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.199387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.199411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.199656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.199678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.199866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.199887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.200212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.200234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.200575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.200597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.200974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.200986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.201183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.201193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.201402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.201411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.201710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.201719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.201903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.201913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.202271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.202284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.202490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.202500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.202830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.202840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.203108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.203118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.203306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.203315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.203602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.203612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.203942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.203952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.204274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.204284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.204668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.204678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.204859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.204868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.205191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.205201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.205515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.205525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.205813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.205822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.206140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.666 [2024-12-06 18:03:34.206151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.666 qpair failed and we were unable to recover it.
00:26:46.666 [2024-12-06 18:03:34.206472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.206481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.206651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.206661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.206969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.206979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.207310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.207320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.207615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.207625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.207969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.207979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.208168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.208178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.208349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.208358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.208718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.208727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.208925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.208935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.209111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.209121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.209332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.209341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.209651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.209661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.209958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.209970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.210157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.210167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.210265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.210274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.210475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.210484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.210645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.210655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.210693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.210702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.210913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.210923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.211257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.211267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.667 qpair failed and we were unable to recover it.
00:26:46.667 [2024-12-06 18:03:34.211448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.667 [2024-12-06 18:03:34.211458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.668 qpair failed and we were unable to recover it.
00:26:46.668 [2024-12-06 18:03:34.211763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.668 [2024-12-06 18:03:34.211773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.668 qpair failed and we were unable to recover it.
00:26:46.668 [2024-12-06 18:03:34.212071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.668 [2024-12-06 18:03:34.212081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.668 qpair failed and we were unable to recover it.
00:26:46.668 [2024-12-06 18:03:34.212287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.668 [2024-12-06 18:03:34.212297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.668 qpair failed and we were unable to recover it.
00:26:46.668 [2024-12-06 18:03:34.212619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.668 [2024-12-06 18:03:34.212630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.668 qpair failed and we were unable to recover it.
00:26:46.668 [2024-12-06 18:03:34.212926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.668 [2024-12-06 18:03:34.212936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.668 qpair failed and we were unable to recover it.
00:26:46.668 [2024-12-06 18:03:34.213248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.668 [2024-12-06 18:03:34.213259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.668 qpair failed and we were unable to recover it.
00:26:46.668 [2024-12-06 18:03:34.213547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.668 [2024-12-06 18:03:34.213558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.668 qpair failed and we were unable to recover it.
00:26:46.668 [2024-12-06 18:03:34.213871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.668 [2024-12-06 18:03:34.213881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.668 qpair failed and we were unable to recover it.
00:26:46.668 [2024-12-06 18:03:34.214186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.668 [2024-12-06 18:03:34.214197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.668 qpair failed and we were unable to recover it.
00:26:46.668 [2024-12-06 18:03:34.214554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.668 [2024-12-06 18:03:34.214564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.668 qpair failed and we were unable to recover it.
00:26:46.668 [2024-12-06 18:03:34.214853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.668 [2024-12-06 18:03:34.214863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.668 qpair failed and we were unable to recover it.
00:26:46.668 [2024-12-06 18:03:34.215024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.668 [2024-12-06 18:03:34.215033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.668 qpair failed and we were unable to recover it.
00:26:46.668 [2024-12-06 18:03:34.215204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.668 [2024-12-06 18:03:34.215215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.668 qpair failed and we were unable to recover it.
00:26:46.668 [2024-12-06 18:03:34.215394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.668 [2024-12-06 18:03:34.215404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.668 qpair failed and we were unable to recover it.
00:26:46.668 [2024-12-06 18:03:34.215699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.668 [2024-12-06 18:03:34.215709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.668 qpair failed and we were unable to recover it.
00:26:46.668 [2024-12-06 18:03:34.216044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-12-06 18:03:34.216054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-12-06 18:03:34.216371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-12-06 18:03:34.216381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-12-06 18:03:34.216543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-12-06 18:03:34.216553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-12-06 18:03:34.216910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-12-06 18:03:34.216922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-12-06 18:03:34.217238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-12-06 18:03:34.217249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-12-06 18:03:34.217600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-12-06 18:03:34.217609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-12-06 18:03:34.217910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-12-06 18:03:34.217919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-12-06 18:03:34.218289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-12-06 18:03:34.218300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-12-06 18:03:34.218636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-12-06 18:03:34.218646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-12-06 18:03:34.219028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-12-06 18:03:34.219038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 
00:26:46.668 [2024-12-06 18:03:34.219360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-12-06 18:03:34.219370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-12-06 18:03:34.219720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-12-06 18:03:34.219729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-12-06 18:03:34.220025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-12-06 18:03:34.220035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.668 [2024-12-06 18:03:34.220359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.668 [2024-12-06 18:03:34.220370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.668 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.220671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.220680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.220974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.220984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.221167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.221178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.221517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.221527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.221842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.221852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.222013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.222022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 
00:26:46.669 [2024-12-06 18:03:34.222329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.222339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.222536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.222546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.222860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.222870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.223054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.223063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.223393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.223403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.223737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.223747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.224051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.224060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.224382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.224393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.224566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.224576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.224852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.224862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 
00:26:46.669 [2024-12-06 18:03:34.225155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.225166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.225478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.225487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.225639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.225648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.226045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.226055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.226225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.226235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.226597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.226607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.226900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.226910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.227129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.227139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.227475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.227485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.227854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.227864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 
00:26:46.669 [2024-12-06 18:03:34.228223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.228234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.228407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.228419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.228701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.228711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.229070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.669 [2024-12-06 18:03:34.229079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.669 qpair failed and we were unable to recover it. 00:26:46.669 [2024-12-06 18:03:34.229291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.229301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.229651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.229661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.229855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.229865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.230174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.230185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.230343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.230353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.230401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.230411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 
00:26:46.670 [2024-12-06 18:03:34.230619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.230628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.230954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.230964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.231256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.231266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.231445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.231455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.231784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.231795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.232081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.232091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.232409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.232419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.232619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.232629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.232815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.232825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.233018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.233028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 
00:26:46.670 [2024-12-06 18:03:34.233335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.233346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.233550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.233560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.233858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.233868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.234195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.234206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.234393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.234402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.234766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.234777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.234952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.234962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.235277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.235287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.235591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.235600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.235953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.235963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 
00:26:46.670 [2024-12-06 18:03:34.236265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.236275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.236624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.236639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.236984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.670 [2024-12-06 18:03:34.236994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.670 qpair failed and we were unable to recover it. 00:26:46.670 [2024-12-06 18:03:34.237045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.237055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.237409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.237419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.237587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.237597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.237772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.237783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.237946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.237955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.238313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.238323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.238628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.238638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 
00:26:46.671 [2024-12-06 18:03:34.238961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.238971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.239266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.239277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.239570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.239580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.239866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.239875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.240166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.240176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.240482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.240492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.240694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.240703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.241018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.241027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.241347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.241358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.241693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.241703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 
00:26:46.671 [2024-12-06 18:03:34.241857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.241866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.242055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.242065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.242406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.242417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.242790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.242800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.242963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.242972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.243310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.243320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.243663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.243673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.243967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.243977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.244200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.244212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.244536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.244545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 
00:26:46.671 [2024-12-06 18:03:34.244875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.244884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.245275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.245286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.671 qpair failed and we were unable to recover it. 00:26:46.671 [2024-12-06 18:03:34.245584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.671 [2024-12-06 18:03:34.245593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.245768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.245778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.246076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.246086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.246386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.246396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.246696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.246706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.246864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.246874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.247076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.247086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.247444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.247454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 
00:26:46.672 [2024-12-06 18:03:34.247653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.247663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.247996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.248006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.248324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.248335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.248661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.248671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.248853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.248863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.249166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.249177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.249486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.249496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.249784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.249793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.249841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.249850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.249935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.249944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 
00:26:46.672 [2024-12-06 18:03:34.250277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.250288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.250625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.250635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.250946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.250955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.251309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.251319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.251651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.251662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.251957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.251967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.252304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.252315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.252646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.252656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.252984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.252994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 00:26:46.672 [2024-12-06 18:03:34.253211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.672 [2024-12-06 18:03:34.253221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.672 qpair failed and we were unable to recover it. 
00:26:46.672 [2024-12-06 18:03:34.253616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-12-06 18:03:34.253627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.672 qpair failed and we were unable to recover it.
00:26:46.672 [2024-12-06 18:03:34.253966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.672 [2024-12-06 18:03:34.253976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.672 qpair failed and we were unable to recover it.
00:26:46.672 [2024-12-06 18:03:34.254281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.254291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.254464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.254474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.254808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.254819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.255181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.255192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.255496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.255507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.255818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.255829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.256121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.256132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.256458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.256470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.256753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.256764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.257054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.257066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.257237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.257249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.257523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.257535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.257810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.257822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.257975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.257986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.258309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.258320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.258490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.258501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.258697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.258708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.258896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.258907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.259084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.259095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.259392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.259402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.259684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.673 [2024-12-06 18:03:34.259694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.673 qpair failed and we were unable to recover it.
00:26:46.673 [2024-12-06 18:03:34.259978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.259989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.260201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.260212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.260395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.260405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.260603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.260614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.260949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.260961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.261164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.261175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.261582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.261592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.261846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.261856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.262167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.262179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.262503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.262515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.262823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.262834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.263127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.263138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.263524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.263536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.263711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.263724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.264015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.264027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.264213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.264225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.264524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.264535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.264834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.264845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.265140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.265152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.265471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.265482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.265645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.265657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.265840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.265851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.266156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.266168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.266521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.266532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.266836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.266848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.267157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.267168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.267504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.267515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.267827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.674 [2024-12-06 18:03:34.267838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.674 qpair failed and we were unable to recover it.
00:26:46.674 [2024-12-06 18:03:34.268110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.268122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.268355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.268367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.268564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.268575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.268896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.268907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.269093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.269109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.269442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.269454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.269739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.269750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.270127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.270139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.270487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.270497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.270803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.270814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.271125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.271138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.271316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.271327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.271646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.271660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.271975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.271987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.272293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.272305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.272694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.272706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.272989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.273001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.273315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.273326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.273544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.273556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.273868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.273880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.274173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.274185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.274484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.274496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.274814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.274826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.275111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.275123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.275314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.275326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.275675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.275686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.275873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.275884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.276173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.276185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.276459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.276471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.276760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.276772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.675 [2024-12-06 18:03:34.277058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.675 [2024-12-06 18:03:34.277070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.675 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.277352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.277364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.277671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.277682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.277865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.277877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.278208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.278220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.278529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.278542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.278734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.278746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.279080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.279092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.279412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.279423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.279698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.279712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.279895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.279906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.280223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.280235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.280580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.280591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.280774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.280785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.281067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.281078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.281384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.281396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.281715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.281727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.282008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.282021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.282217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.282229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.282578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.282590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.282762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.282772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.283107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.283120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.283420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.283431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.283734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.283746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.284031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.284043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.284227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.284238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.284499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.284511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.284706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.284717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.285040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.285052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.676 qpair failed and we were unable to recover it.
00:26:46.676 [2024-12-06 18:03:34.285397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.676 [2024-12-06 18:03:34.285408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.285717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.285728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.286037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.286048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.286351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.286363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.286656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.286668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.286975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.286987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.287266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.287278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.287452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.287464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.287791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.287803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.288118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.288130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.288451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.288463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.288791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.288803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.289108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.289120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.289429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.289441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.289722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.289733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.289896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.289908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.290192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.290204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.290367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.290379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.290603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.290613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.290933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.290944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.291134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.291146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.291490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.291502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.291818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.291830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.292148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.292161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.292491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.292502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.292652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.292662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.292914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.292926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.293223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.293235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.293574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.293586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.293897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.677 [2024-12-06 18:03:34.293909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.677 qpair failed and we were unable to recover it.
00:26:46.677 [2024-12-06 18:03:34.294187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.294199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.294424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.294435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.294661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.294672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.295000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.295012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.295336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.295349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.295630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.295642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.295931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.295943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.296234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.296246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.296559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.296570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.296751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.296761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.297047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.297058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.297375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.297387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.297553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.297566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.297734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.297745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.297927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.297938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.298250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.298263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.298548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.298559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.298857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.298869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.299168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.299183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.299506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.299517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.299692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.299703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.299998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.300010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.300353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.300365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.300682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.300694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.300874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.300886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.301072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.301084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.301369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.301381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.301570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.301582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.301876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.301888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.678 qpair failed and we were unable to recover it.
00:26:46.678 [2024-12-06 18:03:34.302213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.678 [2024-12-06 18:03:34.302225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.302402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.302413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.302708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.302719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.303036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.303048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.303348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.303360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.303403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.303413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.303724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.303736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.304033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.304045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.304338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.304351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.304656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.304668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.304943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.304955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.305234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.305247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.305530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.305542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.305827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.305839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.306113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.306126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.306448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.306460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.306623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.306637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.306942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.306954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.307235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.307247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.307534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.307546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.307844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.307856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.308026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.308037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.308326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.679 [2024-12-06 18:03:34.308338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.679 qpair failed and we were unable to recover it.
00:26:46.679 [2024-12-06 18:03:34.308534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.680 [2024-12-06 18:03:34.308546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.680 qpair failed and we were unable to recover it.
00:26:46.680 [2024-12-06 18:03:34.308871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.680 [2024-12-06 18:03:34.308882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.680 qpair failed and we were unable to recover it.
00:26:46.680 [2024-12-06 18:03:34.309045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.680 [2024-12-06 18:03:34.309055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.680 qpair failed and we were unable to recover it.
00:26:46.680 [2024-12-06 18:03:34.309302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.680 [2024-12-06 18:03:34.309314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.680 qpair failed and we were unable to recover it.
00:26:46.680 [2024-12-06 18:03:34.309609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.680 [2024-12-06 18:03:34.309620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.680 qpair failed and we were unable to recover it.
00:26:46.680 [2024-12-06 18:03:34.309925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.680 [2024-12-06 18:03:34.309936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.680 qpair failed and we were unable to recover it.
00:26:46.680 [2024-12-06 18:03:34.310281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.680 [2024-12-06 18:03:34.310293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.680 qpair failed and we were unable to recover it.
00:26:46.680 [2024-12-06 18:03:34.310649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.680 [2024-12-06 18:03:34.310660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.680 qpair failed and we were unable to recover it.
00:26:46.680 [2024-12-06 18:03:34.310826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.680 [2024-12-06 18:03:34.310838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.680 qpair failed and we were unable to recover it.
00:26:46.680 [2024-12-06 18:03:34.311026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.680 [2024-12-06 18:03:34.311038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.680 qpair failed and we were unable to recover it.
00:26:46.680 [2024-12-06 18:03:34.311353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.680 [2024-12-06 18:03:34.311365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.680 qpair failed and we were unable to recover it.
00:26:46.680 [2024-12-06 18:03:34.311655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.680 [2024-12-06 18:03:34.311667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.680 qpair failed and we were unable to recover it.
00:26:46.680 [2024-12-06 18:03:34.311973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.680 [2024-12-06 18:03:34.311985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.680 qpair failed and we were unable to recover it.
00:26:46.680 [2024-12-06 18:03:34.312159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.680 [2024-12-06 18:03:34.312171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.680 qpair failed and we were unable to recover it.
00:26:46.680 [2024-12-06 18:03:34.312352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.680 [2024-12-06 18:03:34.312364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.680 qpair failed and we were unable to recover it.
00:26:46.680 [2024-12-06 18:03:34.312688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.680 [2024-12-06 18:03:34.312700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.680 qpair failed and we were unable to recover it.
00:26:46.680 [2024-12-06 18:03:34.312980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.680 [2024-12-06 18:03:34.312992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.680 qpair failed and we were unable to recover it.
00:26:46.680 [2024-12-06 18:03:34.313183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.680 [2024-12-06 18:03:34.313195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.680 qpair failed and we were unable to recover it.
00:26:46.680 [2024-12-06 18:03:34.313475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.680 [2024-12-06 18:03:34.313486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.680 qpair failed and we were unable to recover it. 00:26:46.680 [2024-12-06 18:03:34.313765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.680 [2024-12-06 18:03:34.313776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.680 qpair failed and we were unable to recover it. 00:26:46.680 [2024-12-06 18:03:34.314002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.680 [2024-12-06 18:03:34.314014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.680 qpair failed and we were unable to recover it. 00:26:46.680 [2024-12-06 18:03:34.314324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.680 [2024-12-06 18:03:34.314337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.680 qpair failed and we were unable to recover it. 00:26:46.680 [2024-12-06 18:03:34.314609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.680 [2024-12-06 18:03:34.314621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.680 qpair failed and we were unable to recover it. 00:26:46.680 [2024-12-06 18:03:34.314920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.680 [2024-12-06 18:03:34.314932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.680 qpair failed and we were unable to recover it. 00:26:46.680 [2024-12-06 18:03:34.315090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.680 [2024-12-06 18:03:34.315107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.680 qpair failed and we were unable to recover it. 00:26:46.680 [2024-12-06 18:03:34.315395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.680 [2024-12-06 18:03:34.315407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.680 qpair failed and we were unable to recover it. 00:26:46.680 [2024-12-06 18:03:34.315566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.680 [2024-12-06 18:03:34.315577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.680 qpair failed and we were unable to recover it. 00:26:46.680 [2024-12-06 18:03:34.315936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.680 [2024-12-06 18:03:34.315948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.680 qpair failed and we were unable to recover it. 
00:26:46.680 [2024-12-06 18:03:34.316265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.680 [2024-12-06 18:03:34.316276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.680 qpair failed and we were unable to recover it. 00:26:46.680 [2024-12-06 18:03:34.316462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.680 [2024-12-06 18:03:34.316473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.680 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.316830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.316842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.317008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.317019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.317339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.317351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.317526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.317538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.317708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.317720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.318034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.318047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.318351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.318363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.318637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.318649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 
00:26:46.681 [2024-12-06 18:03:34.318864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.318876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.319197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.319210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.319508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.319520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.319797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.319808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.320087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.320097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.320441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.320452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.320636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.320647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.321013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.321024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.321330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.321343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.321632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.321642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 
00:26:46.681 [2024-12-06 18:03:34.321931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.321943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.321989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.322000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.322082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.322094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.322421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.322432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.322718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.322729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.322903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.322913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.323094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.323111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.323502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.323515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.323875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.323886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.681 [2024-12-06 18:03:34.324177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.324189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 
00:26:46.681 [2024-12-06 18:03:34.324523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.681 [2024-12-06 18:03:34.324534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.681 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.324853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.324864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.325046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.325058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.325234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.325249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.325434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.325445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.325692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.325703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.325927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.325938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.326245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.326256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.326554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.326565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.326855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.326866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 
00:26:46.682 [2024-12-06 18:03:34.327167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.327179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.327346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.327357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.327667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.327679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.327968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.327979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.328176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.328187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.328491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.328502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.328788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.328799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.329184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.329196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.329508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.329519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.329811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.329822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 
00:26:46.682 [2024-12-06 18:03:34.330108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.330120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.330472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.330484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.330826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.330837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.331021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.331033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.331348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.331359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.331637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.331648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.331957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.331969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.332272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.332283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.332480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.332490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 00:26:46.682 [2024-12-06 18:03:34.332660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.682 [2024-12-06 18:03:34.332671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.682 qpair failed and we were unable to recover it. 
00:26:46.683 [2024-12-06 18:03:34.332969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.332982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.333143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.333154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.333431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.333442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.333740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.333751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.334056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.334067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.334241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.334252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.334550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.334562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.334863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.334874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.335153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.335165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.335487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.335499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 
00:26:46.683 [2024-12-06 18:03:34.335778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.335789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.336070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.336081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.336378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.336390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.336635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.336646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.336948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.336960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.337246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.337259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.337422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.337434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.337724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.337736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.338069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.338080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.338473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.338485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 
00:26:46.683 [2024-12-06 18:03:34.338803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.338815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.338996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.339007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.339185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.339196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.339526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.339537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.339685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.339696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.340001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.340013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.340292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.340304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.340599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.340612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.683 qpair failed and we were unable to recover it. 00:26:46.683 [2024-12-06 18:03:34.340954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.683 [2024-12-06 18:03:34.340965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.341289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.341301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 
00:26:46.684 [2024-12-06 18:03:34.341586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.341597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.341823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.341835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.342145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.342156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.342437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.342448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.342724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.342735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.342921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.342933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.343096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.343112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.343429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.343440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.343624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.343636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.343954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.343965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 
00:26:46.684 [2024-12-06 18:03:34.344129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.344141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.344433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.344443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.344752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.344763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.344967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.344978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.345274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.345285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.345600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.345612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.345815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.345827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.346145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.346156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.346325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.346337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.346672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.346684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 
00:26:46.684 [2024-12-06 18:03:34.346992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.347004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.347291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.347302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.347577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.347589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.347872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.347884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.348199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.348210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.348537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.348548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.348853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.684 [2024-12-06 18:03:34.348864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.684 qpair failed and we were unable to recover it. 00:26:46.684 [2024-12-06 18:03:34.349156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.685 [2024-12-06 18:03:34.349168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.685 qpair failed and we were unable to recover it. 00:26:46.685 [2024-12-06 18:03:34.349481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.685 [2024-12-06 18:03:34.349492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.685 qpair failed and we were unable to recover it. 00:26:46.685 [2024-12-06 18:03:34.349669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.685 [2024-12-06 18:03:34.349680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.685 qpair failed and we were unable to recover it. 
00:26:46.685 [2024-12-06 18:03:34.349977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.685 [2024-12-06 18:03:34.349988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.685 qpair failed and we were unable to recover it.
[... the same three-line pattern repeats verbatim from 18:03:34.350175 through 18:03:34.407117 (elapsed stamps 00:26:46.685 to 00:26:46.692): every connect() to 10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for the same tqpair=0x132d490, and each qpair fails and cannot be recovered ...]
00:26:46.692 [2024-12-06 18:03:34.407289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.692 [2024-12-06 18:03:34.407300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.692 qpair failed and we were unable to recover it. 00:26:46.692 [2024-12-06 18:03:34.407612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.692 [2024-12-06 18:03:34.407622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.692 qpair failed and we were unable to recover it. 00:26:46.692 [2024-12-06 18:03:34.407934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.692 [2024-12-06 18:03:34.407946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.692 qpair failed and we were unable to recover it. 00:26:46.692 [2024-12-06 18:03:34.408224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.692 [2024-12-06 18:03:34.408235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.692 qpair failed and we were unable to recover it. 00:26:46.692 [2024-12-06 18:03:34.408565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.692 [2024-12-06 18:03:34.408576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.692 qpair failed and we were unable to recover it. 00:26:46.692 [2024-12-06 18:03:34.408757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.692 [2024-12-06 18:03:34.408768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.692 qpair failed and we were unable to recover it. 00:26:46.692 [2024-12-06 18:03:34.408935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.692 [2024-12-06 18:03:34.408946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.692 qpair failed and we were unable to recover it. 00:26:46.692 [2024-12-06 18:03:34.409266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.692 [2024-12-06 18:03:34.409278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.692 qpair failed and we were unable to recover it. 00:26:46.692 [2024-12-06 18:03:34.409570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.692 [2024-12-06 18:03:34.409582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.692 qpair failed and we were unable to recover it. 00:26:46.692 [2024-12-06 18:03:34.409899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.692 [2024-12-06 18:03:34.409910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.692 qpair failed and we were unable to recover it. 
00:26:46.692 [2024-12-06 18:03:34.410073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.692 [2024-12-06 18:03:34.410084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.692 qpair failed and we were unable to recover it. 00:26:46.692 [2024-12-06 18:03:34.410415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.692 [2024-12-06 18:03:34.410426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.692 qpair failed and we were unable to recover it. 00:26:46.692 [2024-12-06 18:03:34.410751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.410762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.411111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.411122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.411449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.411460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.411795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.411806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.412115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.412126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.412487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.412498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.412544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.412553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.412828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.412839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 
00:26:46.693 [2024-12-06 18:03:34.412990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.413003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.413448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.413538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.413963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.414000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.414447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.414498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.414777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.414797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.415000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.415016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.415347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.415364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.415530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.415547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.415921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.415937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.416276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.416293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 
00:26:46.693 [2024-12-06 18:03:34.416491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.416507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.416833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.416848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.417111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.417128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.417488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.417504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.417675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.417690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.418031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.418047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.418233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.418250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.418503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.418523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.418857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.418873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 00:26:46.693 [2024-12-06 18:03:34.419128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.419138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.693 qpair failed and we were unable to recover it. 
00:26:46.693 [2024-12-06 18:03:34.419489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.693 [2024-12-06 18:03:34.419497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.419824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.419831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.420002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.420011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.420194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.420203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.420532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.420540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.420856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.420865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.421169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.421178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.421487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.421495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.421795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.421804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.421972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.421981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 
00:26:46.694 [2024-12-06 18:03:34.422314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.422323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.422602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.422610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.422774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.422782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.422945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.422953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.423140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.423149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.423436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.423446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.423598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.423607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.423921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.423929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.424240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.424248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.424636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.424645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 
00:26:46.694 [2024-12-06 18:03:34.424827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.424835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.425035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.425043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.425357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.425366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.425669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.425677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.425983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.425991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.426323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.426332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.426632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.426640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.426849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.426857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.694 [2024-12-06 18:03:34.427180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.694 [2024-12-06 18:03:34.427191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.694 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.427574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.427584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 
00:26:46.695 [2024-12-06 18:03:34.427908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.427917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.428077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.428085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.428124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.428132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.428287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.428295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.428643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.428652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.428970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.428978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.429152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.429161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.429196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.429204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.429536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.429544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.429713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.429721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 
00:26:46.695 [2024-12-06 18:03:34.430057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.430066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.430445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.430453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.430813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.430822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.430863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.430869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.431188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.431196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.431478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.431486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.431793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.431803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.432132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.432141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.432468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.432476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.432708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.432716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 
00:26:46.695 [2024-12-06 18:03:34.432884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.432892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.433202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.433210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.433557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.433566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.433743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.433752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.434034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.434042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.434317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.434327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.434679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.434688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.434967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.434975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.435013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.695 [2024-12-06 18:03:34.435019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.695 qpair failed and we were unable to recover it. 00:26:46.695 [2024-12-06 18:03:34.435273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.435281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 
00:26:46.696 [2024-12-06 18:03:34.435605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.435613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.435907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.435914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.436085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.436093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.436283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.436290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.436573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.436581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.436727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.436734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.437025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.437034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.437262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.437270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.437548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.437557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.437856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.437865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 
00:26:46.696 [2024-12-06 18:03:34.438024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.438031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.438395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.438404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.438446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.438452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.438779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.438787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.438972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.438980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.439299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.439308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.439628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.439636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.439979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.439988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.440278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.440287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.440580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.440589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 
00:26:46.696 [2024-12-06 18:03:34.440774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.440783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.441066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.441074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.441354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.441362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.441659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.441667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.441854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.441862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.442183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.442191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.442479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.442487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.442653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.442662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.696 qpair failed and we were unable to recover it. 00:26:46.696 [2024-12-06 18:03:34.442835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.696 [2024-12-06 18:03:34.442845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.697 qpair failed and we were unable to recover it. 00:26:46.697 [2024-12-06 18:03:34.443161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.697 [2024-12-06 18:03:34.443169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.697 qpair failed and we were unable to recover it. 
00:26:46.697 [2024-12-06 18:03:34.443480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.697 [2024-12-06 18:03:34.443489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420
00:26:46.697 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 18:03:34.443 and 18:03:34.500 ...]
00:26:46.974 [2024-12-06 18:03:34.500536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.974 [2024-12-06 18:03:34.500545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420
00:26:46.974 qpair failed and we were unable to recover it.
00:26:46.974 [2024-12-06 18:03:34.500722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.974 [2024-12-06 18:03:34.500730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.974 qpair failed and we were unable to recover it. 00:26:46.974 [2024-12-06 18:03:34.500927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.974 [2024-12-06 18:03:34.500935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.974 qpair failed and we were unable to recover it. 00:26:46.974 [2024-12-06 18:03:34.501151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.974 [2024-12-06 18:03:34.501159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.974 qpair failed and we were unable to recover it. 00:26:46.974 [2024-12-06 18:03:34.501444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.974 [2024-12-06 18:03:34.501452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.974 qpair failed and we were unable to recover it. 00:26:46.974 [2024-12-06 18:03:34.501792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.974 [2024-12-06 18:03:34.501800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.974 qpair failed and we were unable to recover it. 00:26:46.974 [2024-12-06 18:03:34.502081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.974 [2024-12-06 18:03:34.502090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.974 qpair failed and we were unable to recover it. 00:26:46.974 [2024-12-06 18:03:34.502469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.974 [2024-12-06 18:03:34.502478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.974 qpair failed and we were unable to recover it. 00:26:46.974 [2024-12-06 18:03:34.502757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.974 [2024-12-06 18:03:34.502766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.974 qpair failed and we were unable to recover it. 00:26:46.974 [2024-12-06 18:03:34.503065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.974 [2024-12-06 18:03:34.503073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.974 qpair failed and we were unable to recover it. 00:26:46.974 [2024-12-06 18:03:34.503378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.974 [2024-12-06 18:03:34.503388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.974 qpair failed and we were unable to recover it. 
00:26:46.974 [2024-12-06 18:03:34.503667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.974 [2024-12-06 18:03:34.503676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.974 qpair failed and we were unable to recover it. 00:26:46.974 [2024-12-06 18:03:34.503969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.503978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.504268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.504276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.504318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.504324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.504645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.504653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.504831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.504839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.505002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.505009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.505315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.505325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.505624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.505632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.505921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.505930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 
00:26:46.975 [2024-12-06 18:03:34.506215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.506224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.506524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.506533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.506820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.506828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.507131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.507140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.507436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.507444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.507785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.507793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.508086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.508094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.508420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.508429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.508713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.508722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.509022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.509030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 
00:26:46.975 [2024-12-06 18:03:34.509333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.509341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.509624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.509633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.509946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.509956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.510152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.510160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.510341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.510349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.510497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.510507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.510788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.510796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.510828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.510835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.511154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.511163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 00:26:46.975 [2024-12-06 18:03:34.511469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.511477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.975 qpair failed and we were unable to recover it. 
00:26:46.975 [2024-12-06 18:03:34.511641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.975 [2024-12-06 18:03:34.511650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.511986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.511995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.512295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.512305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.512614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.512622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.512909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.512918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.513212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.513220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.513518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.513528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.513682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.513692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.514009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.514018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.514313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.514322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 
00:26:46.976 [2024-12-06 18:03:34.514357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.514365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.514551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.514560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.514866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.514874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.515269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.515278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.515562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.515570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.515877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.515885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.516045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.516053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.516365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.516375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.516657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.516665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.516821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.516830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 
00:26:46.976 [2024-12-06 18:03:34.516985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.516995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.517158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.517168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.517329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.517338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.517515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.517523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.517814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.517822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.976 [2024-12-06 18:03:34.518021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.976 [2024-12-06 18:03:34.518029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.976 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.518384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.518392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.518698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.518707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.518993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.519001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.519316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.519324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 
00:26:46.977 [2024-12-06 18:03:34.519632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.519640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.519930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.519938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.520090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.520098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.520306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.520314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.520635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.520644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.520927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.520936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.521125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.521135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.521414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.521422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.521592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.521602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.521929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.521938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 
00:26:46.977 [2024-12-06 18:03:34.522240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.522249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.522538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.522546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.522826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.522834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.523124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.523133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.523449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.523459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.523758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.523767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.524049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.524058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.524355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.524364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.524542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.524551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.524724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.524732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 
00:26:46.977 [2024-12-06 18:03:34.524911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.524920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.525205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.525214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.977 qpair failed and we were unable to recover it. 00:26:46.977 [2024-12-06 18:03:34.525386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.977 [2024-12-06 18:03:34.525394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.525703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.525711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.526008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.526016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.526181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.526190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.526420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.526428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.526464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.526472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.526697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.526707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.526887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.526895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 
00:26:46.978 [2024-12-06 18:03:34.527197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.527206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.527394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.527402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.527714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.527726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.528031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.528040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.528228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.528237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.528402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.528410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.528604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.528614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.528917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.528927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.529214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.529223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.529522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.529532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 
00:26:46.978 [2024-12-06 18:03:34.529840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.529848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.530234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.530243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.530420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.530429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.530595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.530604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.530924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.530933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.531231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.531244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.531428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.531437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.531740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.531750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.531906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.531915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.532227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.532236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 
00:26:46.978 [2024-12-06 18:03:34.532552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.978 [2024-12-06 18:03:34.532560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.978 qpair failed and we were unable to recover it. 00:26:46.978 [2024-12-06 18:03:34.532867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.979 [2024-12-06 18:03:34.532877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.979 qpair failed and we were unable to recover it. 00:26:46.979 [2024-12-06 18:03:34.533194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.979 [2024-12-06 18:03:34.533204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.979 qpair failed and we were unable to recover it. 00:26:46.979 [2024-12-06 18:03:34.533500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.979 [2024-12-06 18:03:34.533509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.979 qpair failed and we were unable to recover it. 00:26:46.979 [2024-12-06 18:03:34.533781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.979 [2024-12-06 18:03:34.533791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.979 qpair failed and we were unable to recover it. 00:26:46.979 [2024-12-06 18:03:34.533978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.979 [2024-12-06 18:03:34.533987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.979 qpair failed and we were unable to recover it. 00:26:46.979 [2024-12-06 18:03:34.534297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.979 [2024-12-06 18:03:34.534307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.979 qpair failed and we were unable to recover it. 00:26:46.979 [2024-12-06 18:03:34.534494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.979 [2024-12-06 18:03:34.534503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.979 qpair failed and we were unable to recover it. 00:26:46.979 [2024-12-06 18:03:34.534812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.979 [2024-12-06 18:03:34.534822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.979 qpair failed and we were unable to recover it. 00:26:46.979 [2024-12-06 18:03:34.535169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.979 [2024-12-06 18:03:34.535178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.979 qpair failed and we were unable to recover it. 
00:26:46.979 [2024-12-06 18:03:34.535496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.979 [2024-12-06 18:03:34.535504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420
00:26:46.979 qpair failed and we were unable to recover it.
00:26:46.979 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats back-to-back, timestamps advancing from 18:03:34.535 through 18:03:34.587 ...]
00:26:46.985 [... error triplets continue, interleaved with the test script's xtrace output ...]
00:26:46.985 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:46.985 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:26:46.986 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:46.986 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:46.986 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
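The "(( i == 0 ))" / "return 0" trace above is the tail of a countdown-style wait helper in autotest_common.sh, here confirming the target came up before timing_exit closes the start_nvmf_tgt region. A hedged sketch of that idiom (function name and retry limit are illustrative, not the suite's actual code):

    # Poll until the target's RPC socket answers, or give up once the
    # countdown hits zero; mirrors the traced "(( i == 0 ))" check.
    wait_for_tgt_rpc() {
        local i
        for ((i = 30; i > 0; i--)); do
            ./scripts/rpc.py -t 1 rpc_get_methods &>/dev/null && break
            sleep 1
        done
        (( i == 0 )) && return 1   # retries exhausted -> fail
        return 0                   # target answered -> success
    }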
00:26:46.986 [... error triplets repeat uninterrupted from 18:03:34.589 through 18:03:34.610, every attempt failing with errno = 111 against 10.0.0.2:4420 ...]
00:26:46.989 [... error triplets continue while the script installs its cleanup trap and creates the test bdev ...]
00:26:46.989 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:46.989 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:46.989 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.989 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
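For context: rpc_cmd in the trace above is the test suite's wrapper around SPDK's JSON-RPC client, and this call creates a 64 MiB RAM-backed bdev with 512-byte blocks, named Malloc0, for the disconnect test to drive I/O against; the trap ensures nvmftestfini cleans up even on interrupt. The equivalent direct invocation (a sketch; the -s socket path is SPDK's default, adjust if the target was started with a different socket):

    # Create a 64 MiB malloc bdev with a 512-byte block size, named Malloc0.
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
    # On success rpc.py prints the new bdev's name, e.g. "Malloc0".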
00:26:46.989 [2024-12-06 18:03:34.613153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.989 [2024-12-06 18:03:34.613163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.989 qpair failed and we were unable to recover it. 00:26:46.989 [2024-12-06 18:03:34.613465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.989 [2024-12-06 18:03:34.613474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.989 qpair failed and we were unable to recover it. 00:26:46.989 [2024-12-06 18:03:34.613774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.989 [2024-12-06 18:03:34.613785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.989 qpair failed and we were unable to recover it. 00:26:46.989 [2024-12-06 18:03:34.613951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.989 [2024-12-06 18:03:34.613960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.989 qpair failed and we were unable to recover it. 00:26:46.989 [2024-12-06 18:03:34.614230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.989 [2024-12-06 18:03:34.614239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.989 qpair failed and we were unable to recover it. 00:26:46.989 [2024-12-06 18:03:34.614606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.989 [2024-12-06 18:03:34.614614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.989 qpair failed and we were unable to recover it. 00:26:46.989 [2024-12-06 18:03:34.614784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.989 [2024-12-06 18:03:34.614792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.989 qpair failed and we were unable to recover it. 00:26:46.989 [2024-12-06 18:03:34.615120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.989 [2024-12-06 18:03:34.615129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.989 qpair failed and we were unable to recover it. 00:26:46.989 [2024-12-06 18:03:34.615313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.989 [2024-12-06 18:03:34.615321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.989 qpair failed and we were unable to recover it. 00:26:46.989 [2024-12-06 18:03:34.615623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.989 [2024-12-06 18:03:34.615631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.989 qpair failed and we were unable to recover it. 
00:26:46.989 [2024-12-06 18:03:34.615937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.989 [2024-12-06 18:03:34.615944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.989 qpair failed and we were unable to recover it. 00:26:46.989 [2024-12-06 18:03:34.616122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.989 [2024-12-06 18:03:34.616130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.989 qpair failed and we were unable to recover it. 00:26:46.989 [2024-12-06 18:03:34.616435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.989 [2024-12-06 18:03:34.616443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.989 qpair failed and we were unable to recover it. 00:26:46.989 [2024-12-06 18:03:34.616601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.989 [2024-12-06 18:03:34.616610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.989 qpair failed and we were unable to recover it. 00:26:46.989 [2024-12-06 18:03:34.616920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.989 [2024-12-06 18:03:34.616928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.989 qpair failed and we were unable to recover it. 00:26:46.989 [2024-12-06 18:03:34.617096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.989 [2024-12-06 18:03:34.617112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.989 qpair failed and we were unable to recover it. 00:26:46.989 [2024-12-06 18:03:34.617421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.989 [2024-12-06 18:03:34.617429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.617746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.617754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.618076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.618084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.618383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.618393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 
00:26:46.990 [2024-12-06 18:03:34.618434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.618442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.618612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.618621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.618891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.618900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.619228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.619238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.619425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.619433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.619729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.619738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.620075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.620084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.620238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.620246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.620533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.620541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.620711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.620718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 
00:26:46.990 [2024-12-06 18:03:34.621038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.621047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.621379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.621388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.621699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.621707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.622023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.622032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.622363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.622372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.622695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.622704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.622873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.622882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.623050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.623059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.623235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.623244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.623589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.623598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 
00:26:46.990 [2024-12-06 18:03:34.623981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.623990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.624152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.624160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.624491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.624501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.624842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.624850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.625166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.990 [2024-12-06 18:03:34.625174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.990 qpair failed and we were unable to recover it. 00:26:46.990 [2024-12-06 18:03:34.625395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.991 [2024-12-06 18:03:34.625403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.991 qpair failed and we were unable to recover it. 00:26:46.991 [2024-12-06 18:03:34.625687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.991 [2024-12-06 18:03:34.625695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.991 qpair failed and we were unable to recover it. 00:26:46.991 [2024-12-06 18:03:34.625870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.991 [2024-12-06 18:03:34.625878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.991 qpair failed and we were unable to recover it. 00:26:46.991 [2024-12-06 18:03:34.626111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.991 [2024-12-06 18:03:34.626119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.991 qpair failed and we were unable to recover it. 00:26:46.991 [2024-12-06 18:03:34.626454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.991 [2024-12-06 18:03:34.626462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420 00:26:46.991 qpair failed and we were unable to recover it. 
00:26:46.991 [2024-12-06 18:03:34.626808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.991 [2024-12-06 18:03:34.626816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff23c000b90 with addr=10.0.0.2, port=4420
00:26:46.991 qpair failed and we were unable to recover it.
00:26:46.991 [the connect()/sock-connection-error/qpair-failed triplet above repeats with advancing timestamps, 18:03:34.627106 through 18:03:34.637135, as the host keeps retrying tqpair=0x7ff23c000b90 against 10.0.0.2:4420]
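
For context: errno 111 on Linux is ECONNREFUSED, i.e. the target has no listener on 10.0.0.2:4420 yet, so the kernel answers each SYN with a RST and SPDK's host-side retry loop keeps reconnecting. A minimal sketch (hypothetical, not from this run) that reproduces the same condition from bash:

# Probe 10.0.0.2:4420 the way the initiator does; before nvmf_subsystem_add_listener
# runs on the target, opening the socket fails with "Connection refused" (errno 111).
if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "connect() refused, matching the posix.c:1054 errors above"
fi
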
00:26:46.992 Malloc0
00:26:46.992 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.992 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:46.992 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.992 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:46.992 [host-side connect() retry triplets continue in the background, 18:03:34.637443 through 18:03:34.641653]
00:26:46.992 [2024-12-06 18:03:34.641455] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
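
The rpc_cmd wrapper in the trace above forwards its arguments to SPDK's scripts/rpc.py. A standalone sketch of the same transport-creation step ($SPDK_DIR is an assumption; the -o flag is carried over verbatim from the trace, not independently verified):

# Create the TCP transport on a running nvmf_tgt, mirroring the rpc_cmd call above;
# the *NOTICE* "TCP Transport Init" line is the target acknowledging this step.
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o
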
00:26:46.993 [connect() retry triplets continue, 18:03:34.641958 through 18:03:34.649711, all against tqpair=0x7ff23c000b90, addr=10.0.0.2, port=4420]
00:26:46.994 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.994 [connect() retry triplet at 18:03:34.649877]
00:26:46.994 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:46.994 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.994 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:46.994 [connect() retry triplets continue, 18:03:34.650248 through 18:03:34.652017]
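
The same subsystem-creation step as a standalone rpc.py invocation (a sketch; $SPDK_DIR assumed, all other arguments taken from the trace):

# Create subsystem cnode1, allowing any host NQN to connect (-a),
# with the serial number used by the test.
$SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
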
00:26:46.994 [connect() retry triplets continue, 18:03:34.652217 through 18:03:34.657625]
00:26:46.995 [connect() retry triplets continue, 18:03:34.657953 through 18:03:34.658007]
00:26:46.995 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.995 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:46.995 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.995 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:46.995 [connect() retry triplets continue, 18:03:34.658187 through 18:03:34.659191]
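
The Malloc0 bdev named above gets exposed as a namespace of cnode1. A standalone sketch of the step (the bdev_malloc_create size and block size are illustrative assumptions; only the add_ns call is taken from the trace):

# Create a 64 MiB RAM-backed bdev with 512-byte blocks named Malloc0 (sizes assumed),
# then attach it as a namespace of cnode1, mirroring the rpc_cmd above.
$SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
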
00:26:46.995 [connect() retry triplets continue, 18:03:34.659474 through 18:03:34.664382]
00:26:46.996 [2024-12-06 18:03:34.664920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.996 [2024-12-06 18:03:34.665004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132d490 with addr=10.0.0.2, port=4420
00:26:46.996 qpair failed and we were unable to recover it.
00:26:46.996 Read completed with error (sct=0, sc=8)
00:26:46.996 starting I/O failed
00:26:46.996 Write completed with error (sct=0, sc=8)
00:26:46.996 starting I/O failed
00:26:46.996 [32 Read/Write completions in total fail with (sct=0, sc=8), each followed by "starting I/O failed"]
00:26:46.996 [2024-12-06 18:03:34.665383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:46.996 [2024-12-06 18:03:34.665769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.996 [2024-12-06 18:03:34.665802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff240000b90 with addr=10.0.0.2, port=4420
00:26:46.996 qpair failed and we were unable to recover it.
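
Decoding the completions above: sct=0 is the NVMe generic status type, and per the NVMe base specification sc=0x8 in that type is "Command Aborted due to SQ Deletion" -- the in-flight reads and writes were aborted when their qpair was torn down. A quick way to tally them from a saved copy of this log (file name assumed):

# Count the aborted I/O completions captured in the log.
grep -c 'completed with error (sct=0, sc=8)' nvmf-tcp-phy-autotest.log
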
00:26:46.996 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.996 [connect() retry triplet at 18:03:34.666118 against tqpair=0x7ff23c000b90]
00:26:46.996 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:46.996 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.996 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:46.996 [connect() retry triplets continue, 18:03:34.666387 through 18:03:34.668303]
00:26:46.997 [connect() retry triplets continue, 18:03:34.668461 through 18:03:34.669454]
00:26:46.997 [2024-12-06 18:03:34.669715] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:46.997 [2024-12-06 18:03:34.672088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.997 [2024-12-06 18:03:34.672148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.997 [2024-12-06 18:03:34.672161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.997 [2024-12-06 18:03:34.672167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.997 [2024-12-06 18:03:34.672172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:46.997 [2024-12-06 18:03:34.672185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:46.997 qpair failed and we were unable to recover it.
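
Once nvmf_tcp_listen reports the listener, TCP connects start succeeding and the failures move up a layer: sct 1 is the command-specific status type, and for a Fabrics CONNECT command sc 0x82 (decimal 130) is "Connect Invalid Parameters" per the NVMe-oF specification. That is consistent with the target-side "Unknown controller ID 0x1": the host is trying to attach an I/O qpair to a controller the target no longer tracks. The listener step itself, as a standalone sketch ($SPDK_DIR assumed, arguments from the trace):

# Start listening for cnode1 on 10.0.0.2:4420, mirroring the rpc_cmd above;
# from this point connect() no longer returns ECONNREFUSED.
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
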
00:26:46.997 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.997 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:46.997 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.997 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:46.997 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.997 [2024-12-06 18:03:34.682097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.997 [2024-12-06 18:03:34.682156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.997 [2024-12-06 18:03:34.682166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.997 [2024-12-06 18:03:34.682172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.997 [2024-12-06 18:03:34.682177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:46.997 [2024-12-06 18:03:34.682187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:46.997 qpair failed and we were unable to recover it.
00:26:46.997 18:03:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3216046
00:26:46.997 [the same CONNECT-poll failure block repeats at 18:03:34.691997 on qpair id 4]
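
A host-side sketch of the attach the test is exercising (hypothetical, using nvme-cli rather than the SPDK initiator the test actually drives):

# Attempt the fabrics CONNECT by hand; against the state captured above this
# would fail the I/O-qpair CONNECT with Invalid Parameters (sct 1, sc 0x82).
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
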
00:26:46.997 [2024-12-06 18:03:34.702128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.997 [2024-12-06 18:03:34.702196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.997 [2024-12-06 18:03:34.702206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.997 [2024-12-06 18:03:34.702211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.997 [2024-12-06 18:03:34.702216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:46.997 [2024-12-06 18:03:34.702226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.997 qpair failed and we were unable to recover it. 00:26:46.997 [2024-12-06 18:03:34.712064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.997 [2024-12-06 18:03:34.712115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.997 [2024-12-06 18:03:34.712125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.997 [2024-12-06 18:03:34.712130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.997 [2024-12-06 18:03:34.712135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:46.997 [2024-12-06 18:03:34.712146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.997 qpair failed and we were unable to recover it. 00:26:46.997 [2024-12-06 18:03:34.722109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:46.997 [2024-12-06 18:03:34.722163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:46.997 [2024-12-06 18:03:34.722173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:46.997 [2024-12-06 18:03:34.722178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:46.997 [2024-12-06 18:03:34.722182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:46.997 [2024-12-06 18:03:34.722193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:46.997 qpair failed and we were unable to recover it. 
00:26:46.997 [2024-12-06 18:03:34.732201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.997 [2024-12-06 18:03:34.732252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.997 [2024-12-06 18:03:34.732264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.997 [2024-12-06 18:03:34.732270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.997 [2024-12-06 18:03:34.732275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:46.997 [2024-12-06 18:03:34.732285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:46.998 qpair failed and we were unable to recover it.
00:26:46.998 [2024-12-06 18:03:34.742178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.998 [2024-12-06 18:03:34.742229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.998 [2024-12-06 18:03:34.742239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.998 [2024-12-06 18:03:34.742245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.998 [2024-12-06 18:03:34.742249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:46.998 [2024-12-06 18:03:34.742259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:46.998 qpair failed and we were unable to recover it.
00:26:46.998 [2024-12-06 18:03:34.752177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.998 [2024-12-06 18:03:34.752225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.998 [2024-12-06 18:03:34.752235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.998 [2024-12-06 18:03:34.752241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.998 [2024-12-06 18:03:34.752245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:46.998 [2024-12-06 18:03:34.752256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:46.998 qpair failed and we were unable to recover it.
00:26:46.998 [2024-12-06 18:03:34.762245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.998 [2024-12-06 18:03:34.762290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.998 [2024-12-06 18:03:34.762300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.998 [2024-12-06 18:03:34.762305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.998 [2024-12-06 18:03:34.762310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:46.998 [2024-12-06 18:03:34.762320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:46.998 qpair failed and we were unable to recover it.
00:26:46.998 [2024-12-06 18:03:34.772250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.998 [2024-12-06 18:03:34.772302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.998 [2024-12-06 18:03:34.772311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.998 [2024-12-06 18:03:34.772317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.998 [2024-12-06 18:03:34.772321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:46.998 [2024-12-06 18:03:34.772336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:46.998 qpair failed and we were unable to recover it.
00:26:46.998 [2024-12-06 18:03:34.782276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:46.998 [2024-12-06 18:03:34.782325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:46.998 [2024-12-06 18:03:34.782336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:46.998 [2024-12-06 18:03:34.782342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:46.998 [2024-12-06 18:03:34.782348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:46.998 [2024-12-06 18:03:34.782358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:46.998 qpair failed and we were unable to recover it.
00:26:47.260 [2024-12-06 18:03:34.792151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.260 [2024-12-06 18:03:34.792196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.260 [2024-12-06 18:03:34.792207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.260 [2024-12-06 18:03:34.792212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.260 [2024-12-06 18:03:34.792217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.260 [2024-12-06 18:03:34.792229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.260 qpair failed and we were unable to recover it.
00:26:47.260 [2024-12-06 18:03:34.802386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.260 [2024-12-06 18:03:34.802432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.260 [2024-12-06 18:03:34.802442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.260 [2024-12-06 18:03:34.802447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.260 [2024-12-06 18:03:34.802452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.260 [2024-12-06 18:03:34.802463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.260 qpair failed and we were unable to recover it.
00:26:47.260 [2024-12-06 18:03:34.812345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.260 [2024-12-06 18:03:34.812396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.260 [2024-12-06 18:03:34.812405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.260 [2024-12-06 18:03:34.812411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.260 [2024-12-06 18:03:34.812415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.260 [2024-12-06 18:03:34.812426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.260 qpair failed and we were unable to recover it.
00:26:47.260 [2024-12-06 18:03:34.822404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.260 [2024-12-06 18:03:34.822457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.260 [2024-12-06 18:03:34.822466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.260 [2024-12-06 18:03:34.822472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.260 [2024-12-06 18:03:34.822476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.260 [2024-12-06 18:03:34.822487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.260 qpair failed and we were unable to recover it.
00:26:47.260 [2024-12-06 18:03:34.832395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.260 [2024-12-06 18:03:34.832438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.260 [2024-12-06 18:03:34.832448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.260 [2024-12-06 18:03:34.832453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.260 [2024-12-06 18:03:34.832458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.260 [2024-12-06 18:03:34.832468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.260 qpair failed and we were unable to recover it.
00:26:47.260 [2024-12-06 18:03:34.842302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.260 [2024-12-06 18:03:34.842349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.260 [2024-12-06 18:03:34.842359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.261 [2024-12-06 18:03:34.842364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.261 [2024-12-06 18:03:34.842368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.261 [2024-12-06 18:03:34.842379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.261 qpair failed and we were unable to recover it.
00:26:47.261 [2024-12-06 18:03:34.852456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.261 [2024-12-06 18:03:34.852500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.261 [2024-12-06 18:03:34.852511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.261 [2024-12-06 18:03:34.852516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.261 [2024-12-06 18:03:34.852521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.261 [2024-12-06 18:03:34.852531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.261 qpair failed and we were unable to recover it.
00:26:47.261 [2024-12-06 18:03:34.862506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.261 [2024-12-06 18:03:34.862556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.261 [2024-12-06 18:03:34.862568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.261 [2024-12-06 18:03:34.862574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.261 [2024-12-06 18:03:34.862578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.261 [2024-12-06 18:03:34.862588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.261 qpair failed and we were unable to recover it.
00:26:47.261 [2024-12-06 18:03:34.872542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.261 [2024-12-06 18:03:34.872588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.261 [2024-12-06 18:03:34.872598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.261 [2024-12-06 18:03:34.872603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.261 [2024-12-06 18:03:34.872607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.261 [2024-12-06 18:03:34.872617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.261 qpair failed and we were unable to recover it.
00:26:47.261 [2024-12-06 18:03:34.882537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.261 [2024-12-06 18:03:34.882588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.261 [2024-12-06 18:03:34.882598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.261 [2024-12-06 18:03:34.882603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.261 [2024-12-06 18:03:34.882608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.261 [2024-12-06 18:03:34.882618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.261 qpair failed and we were unable to recover it.
00:26:47.261 [2024-12-06 18:03:34.892452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.261 [2024-12-06 18:03:34.892499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.261 [2024-12-06 18:03:34.892509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.261 [2024-12-06 18:03:34.892514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.261 [2024-12-06 18:03:34.892519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.261 [2024-12-06 18:03:34.892529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.261 qpair failed and we were unable to recover it.
00:26:47.261 [2024-12-06 18:03:34.902606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.261 [2024-12-06 18:03:34.902656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.261 [2024-12-06 18:03:34.902666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.261 [2024-12-06 18:03:34.902671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.261 [2024-12-06 18:03:34.902678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.261 [2024-12-06 18:03:34.902688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.261 qpair failed and we were unable to recover it.
00:26:47.261 [2024-12-06 18:03:34.912595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.261 [2024-12-06 18:03:34.912668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.261 [2024-12-06 18:03:34.912678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.261 [2024-12-06 18:03:34.912684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.261 [2024-12-06 18:03:34.912689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.261 [2024-12-06 18:03:34.912699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.261 qpair failed and we were unable to recover it.
00:26:47.261 [2024-12-06 18:03:34.922615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.261 [2024-12-06 18:03:34.922694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.261 [2024-12-06 18:03:34.922704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.261 [2024-12-06 18:03:34.922710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.261 [2024-12-06 18:03:34.922715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.261 [2024-12-06 18:03:34.922725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.261 qpair failed and we were unable to recover it.
00:26:47.261 [2024-12-06 18:03:34.932679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.261 [2024-12-06 18:03:34.932724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.261 [2024-12-06 18:03:34.932734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.261 [2024-12-06 18:03:34.932740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.261 [2024-12-06 18:03:34.932745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.261 [2024-12-06 18:03:34.932755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.261 qpair failed and we were unable to recover it.
00:26:47.261 [2024-12-06 18:03:34.942777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.261 [2024-12-06 18:03:34.942841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.261 [2024-12-06 18:03:34.942850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.261 [2024-12-06 18:03:34.942856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.261 [2024-12-06 18:03:34.942861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.261 [2024-12-06 18:03:34.942871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.261 qpair failed and we were unable to recover it.
00:26:47.261 [2024-12-06 18:03:34.952763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.261 [2024-12-06 18:03:34.952807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.261 [2024-12-06 18:03:34.952817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.261 [2024-12-06 18:03:34.952822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.261 [2024-12-06 18:03:34.952827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.261 [2024-12-06 18:03:34.952837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.261 qpair failed and we were unable to recover it.
00:26:47.261 [2024-12-06 18:03:34.962622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.261 [2024-12-06 18:03:34.962663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.261 [2024-12-06 18:03:34.962673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.261 [2024-12-06 18:03:34.962678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.261 [2024-12-06 18:03:34.962682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.261 [2024-12-06 18:03:34.962692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.261 qpair failed and we were unable to recover it.
00:26:47.261 [2024-12-06 18:03:34.972846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.261 [2024-12-06 18:03:34.972893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.262 [2024-12-06 18:03:34.972903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.262 [2024-12-06 18:03:34.972908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.262 [2024-12-06 18:03:34.972913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.262 [2024-12-06 18:03:34.972923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-12-06 18:03:34.982694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.262 [2024-12-06 18:03:34.982748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.262 [2024-12-06 18:03:34.982760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.262 [2024-12-06 18:03:34.982765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.262 [2024-12-06 18:03:34.982770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.262 [2024-12-06 18:03:34.982780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-12-06 18:03:34.992814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.262 [2024-12-06 18:03:34.992891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.262 [2024-12-06 18:03:34.992903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.262 [2024-12-06 18:03:34.992909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.262 [2024-12-06 18:03:34.992913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.262 [2024-12-06 18:03:34.992923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-12-06 18:03:35.002839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.262 [2024-12-06 18:03:35.002883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.262 [2024-12-06 18:03:35.002893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.262 [2024-12-06 18:03:35.002898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.262 [2024-12-06 18:03:35.002903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.262 [2024-12-06 18:03:35.002913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-12-06 18:03:35.012904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.262 [2024-12-06 18:03:35.012958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.262 [2024-12-06 18:03:35.012977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.262 [2024-12-06 18:03:35.012984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.262 [2024-12-06 18:03:35.012989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.262 [2024-12-06 18:03:35.013003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-12-06 18:03:35.022820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.262 [2024-12-06 18:03:35.022911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.262 [2024-12-06 18:03:35.022930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.262 [2024-12-06 18:03:35.022937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.262 [2024-12-06 18:03:35.022943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.262 [2024-12-06 18:03:35.022957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-12-06 18:03:35.032921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.262 [2024-12-06 18:03:35.032970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.262 [2024-12-06 18:03:35.032989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.262 [2024-12-06 18:03:35.032998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.262 [2024-12-06 18:03:35.033004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.262 [2024-12-06 18:03:35.033018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-12-06 18:03:35.042937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.262 [2024-12-06 18:03:35.042994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.262 [2024-12-06 18:03:35.043005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.262 [2024-12-06 18:03:35.043011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.262 [2024-12-06 18:03:35.043016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.262 [2024-12-06 18:03:35.043027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-12-06 18:03:35.052874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.262 [2024-12-06 18:03:35.052922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.262 [2024-12-06 18:03:35.052932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.262 [2024-12-06 18:03:35.052938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.262 [2024-12-06 18:03:35.052943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.262 [2024-12-06 18:03:35.052954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-12-06 18:03:35.063042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.262 [2024-12-06 18:03:35.063103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.262 [2024-12-06 18:03:35.063113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.262 [2024-12-06 18:03:35.063118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.262 [2024-12-06 18:03:35.063123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.262 [2024-12-06 18:03:35.063134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-12-06 18:03:35.073052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.262 [2024-12-06 18:03:35.073097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.262 [2024-12-06 18:03:35.073109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.262 [2024-12-06 18:03:35.073115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.262 [2024-12-06 18:03:35.073119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.262 [2024-12-06 18:03:35.073129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.262 [2024-12-06 18:03:35.082910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.262 [2024-12-06 18:03:35.082956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.262 [2024-12-06 18:03:35.082966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.262 [2024-12-06 18:03:35.082972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.262 [2024-12-06 18:03:35.082977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.262 [2024-12-06 18:03:35.082987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.262 qpair failed and we were unable to recover it.
00:26:47.522 [2024-12-06 18:03:35.093120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.522 [2024-12-06 18:03:35.093162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.522 [2024-12-06 18:03:35.093172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.522 [2024-12-06 18:03:35.093177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.522 [2024-12-06 18:03:35.093182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.522 [2024-12-06 18:03:35.093192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.522 qpair failed and we were unable to recover it.
00:26:47.522 [2024-12-06 18:03:35.103162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.522 [2024-12-06 18:03:35.103210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.522 [2024-12-06 18:03:35.103220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.522 [2024-12-06 18:03:35.103225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.522 [2024-12-06 18:03:35.103230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.522 [2024-12-06 18:03:35.103240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.522 qpair failed and we were unable to recover it.
00:26:47.522 [2024-12-06 18:03:35.113158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.522 [2024-12-06 18:03:35.113206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.522 [2024-12-06 18:03:35.113216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.522 [2024-12-06 18:03:35.113221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.522 [2024-12-06 18:03:35.113226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.522 [2024-12-06 18:03:35.113236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.522 qpair failed and we were unable to recover it.
00:26:47.522 [2024-12-06 18:03:35.123154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.522 [2024-12-06 18:03:35.123199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.522 [2024-12-06 18:03:35.123210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.522 [2024-12-06 18:03:35.123215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.522 [2024-12-06 18:03:35.123220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.522 [2024-12-06 18:03:35.123230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.522 qpair failed and we were unable to recover it.
00:26:47.522 [2024-12-06 18:03:35.133203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.522 [2024-12-06 18:03:35.133247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.522 [2024-12-06 18:03:35.133256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.522 [2024-12-06 18:03:35.133262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.522 [2024-12-06 18:03:35.133267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.522 [2024-12-06 18:03:35.133277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.522 qpair failed and we were unable to recover it.
00:26:47.522 [2024-12-06 18:03:35.143142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.522 [2024-12-06 18:03:35.143192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.522 [2024-12-06 18:03:35.143203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.522 [2024-12-06 18:03:35.143209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.522 [2024-12-06 18:03:35.143213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.522 [2024-12-06 18:03:35.143224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.522 qpair failed and we were unable to recover it.
00:26:47.522 [2024-12-06 18:03:35.153273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.522 [2024-12-06 18:03:35.153368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.522 [2024-12-06 18:03:35.153378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.522 [2024-12-06 18:03:35.153383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.522 [2024-12-06 18:03:35.153388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.522 [2024-12-06 18:03:35.153399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.522 qpair failed and we were unable to recover it.
00:26:47.522 [2024-12-06 18:03:35.163267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.522 [2024-12-06 18:03:35.163312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.522 [2024-12-06 18:03:35.163322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.522 [2024-12-06 18:03:35.163330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.522 [2024-12-06 18:03:35.163335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.522 [2024-12-06 18:03:35.163345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.522 qpair failed and we were unable to recover it.
00:26:47.522 [2024-12-06 18:03:35.173283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.522 [2024-12-06 18:03:35.173323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.522 [2024-12-06 18:03:35.173333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.522 [2024-12-06 18:03:35.173338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.522 [2024-12-06 18:03:35.173343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.522 [2024-12-06 18:03:35.173354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.522 qpair failed and we were unable to recover it.
00:26:47.522 [2024-12-06 18:03:35.183447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.522 [2024-12-06 18:03:35.183504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.522 [2024-12-06 18:03:35.183514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.522 [2024-12-06 18:03:35.183519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.522 [2024-12-06 18:03:35.183524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.522 [2024-12-06 18:03:35.183534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.522 qpair failed and we were unable to recover it.
00:26:47.522 [2024-12-06 18:03:35.193390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.522 [2024-12-06 18:03:35.193472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.522 [2024-12-06 18:03:35.193482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.522 [2024-12-06 18:03:35.193487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.522 [2024-12-06 18:03:35.193491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.522 [2024-12-06 18:03:35.193502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.522 qpair failed and we were unable to recover it.
00:26:47.523 [2024-12-06 18:03:35.203381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.523 [2024-12-06 18:03:35.203427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.523 [2024-12-06 18:03:35.203436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.523 [2024-12-06 18:03:35.203441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.523 [2024-12-06 18:03:35.203446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.523 [2024-12-06 18:03:35.203459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.523 qpair failed and we were unable to recover it.
00:26:47.523 [2024-12-06 18:03:35.213413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.523 [2024-12-06 18:03:35.213456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.523 [2024-12-06 18:03:35.213466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.523 [2024-12-06 18:03:35.213471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.523 [2024-12-06 18:03:35.213476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.523 [2024-12-06 18:03:35.213486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.523 qpair failed and we were unable to recover it.
00:26:47.523 [2024-12-06 18:03:35.223496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.523 [2024-12-06 18:03:35.223543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.523 [2024-12-06 18:03:35.223553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.523 [2024-12-06 18:03:35.223558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.523 [2024-12-06 18:03:35.223563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.523 [2024-12-06 18:03:35.223573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.523 qpair failed and we were unable to recover it.
00:26:47.523 [2024-12-06 18:03:35.233496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.523 [2024-12-06 18:03:35.233541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.523 [2024-12-06 18:03:35.233551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.523 [2024-12-06 18:03:35.233556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.523 [2024-12-06 18:03:35.233561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.523 [2024-12-06 18:03:35.233571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.523 qpair failed and we were unable to recover it.
00:26:47.523 [2024-12-06 18:03:35.243510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.523 [2024-12-06 18:03:35.243589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.523 [2024-12-06 18:03:35.243599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.523 [2024-12-06 18:03:35.243605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.523 [2024-12-06 18:03:35.243609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.523 [2024-12-06 18:03:35.243620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.523 qpair failed and we were unable to recover it.
00:26:47.523 [2024-12-06 18:03:35.253517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.523 [2024-12-06 18:03:35.253564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.523 [2024-12-06 18:03:35.253574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.523 [2024-12-06 18:03:35.253579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.523 [2024-12-06 18:03:35.253584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.523 [2024-12-06 18:03:35.253594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.523 qpair failed and we were unable to recover it.
00:26:47.523 [2024-12-06 18:03:35.263442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.523 [2024-12-06 18:03:35.263506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.523 [2024-12-06 18:03:35.263515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.523 [2024-12-06 18:03:35.263521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.523 [2024-12-06 18:03:35.263525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.523 [2024-12-06 18:03:35.263535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.523 qpair failed and we were unable to recover it.
00:26:47.523 [2024-12-06 18:03:35.273563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.523 [2024-12-06 18:03:35.273620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.523 [2024-12-06 18:03:35.273630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.523 [2024-12-06 18:03:35.273635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.523 [2024-12-06 18:03:35.273640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.523 [2024-12-06 18:03:35.273650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.523 qpair failed and we were unable to recover it.
00:26:47.523 [2024-12-06 18:03:35.283588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.523 [2024-12-06 18:03:35.283642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.523 [2024-12-06 18:03:35.283652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.523 [2024-12-06 18:03:35.283657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.523 [2024-12-06 18:03:35.283662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.523 [2024-12-06 18:03:35.283672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.523 qpair failed and we were unable to recover it.
00:26:47.523 [2024-12-06 18:03:35.293613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:47.523 [2024-12-06 18:03:35.293656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:47.523 [2024-12-06 18:03:35.293668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:47.523 [2024-12-06 18:03:35.293673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:47.523 [2024-12-06 18:03:35.293678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:47.523 [2024-12-06 18:03:35.293688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:47.523 qpair failed and we were unable to recover it.
00:26:47.523 [2024-12-06 18:03:35.303712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.523 [2024-12-06 18:03:35.303765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.523 [2024-12-06 18:03:35.303775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.523 [2024-12-06 18:03:35.303780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.523 [2024-12-06 18:03:35.303784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.523 [2024-12-06 18:03:35.303794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-12-06 18:03:35.313553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.523 [2024-12-06 18:03:35.313596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.523 [2024-12-06 18:03:35.313606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.523 [2024-12-06 18:03:35.313612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.523 [2024-12-06 18:03:35.313617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.523 [2024-12-06 18:03:35.313627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-12-06 18:03:35.323721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.523 [2024-12-06 18:03:35.323765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.523 [2024-12-06 18:03:35.323775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.523 [2024-12-06 18:03:35.323780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.523 [2024-12-06 18:03:35.323785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.523 [2024-12-06 18:03:35.323795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.523 qpair failed and we were unable to recover it. 
00:26:47.523 [2024-12-06 18:03:35.333736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.524 [2024-12-06 18:03:35.333777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.524 [2024-12-06 18:03:35.333787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.524 [2024-12-06 18:03:35.333792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.524 [2024-12-06 18:03:35.333800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.524 [2024-12-06 18:03:35.333810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-12-06 18:03:35.343797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.524 [2024-12-06 18:03:35.343844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.524 [2024-12-06 18:03:35.343854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.524 [2024-12-06 18:03:35.343859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.524 [2024-12-06 18:03:35.343864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.524 [2024-12-06 18:03:35.343874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.784 [2024-12-06 18:03:35.353802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.784 [2024-12-06 18:03:35.353889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.784 [2024-12-06 18:03:35.353899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.784 [2024-12-06 18:03:35.353904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.784 [2024-12-06 18:03:35.353909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.784 [2024-12-06 18:03:35.353920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.784 qpair failed and we were unable to recover it. 
00:26:47.784 [2024-12-06 18:03:35.363815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.784 [2024-12-06 18:03:35.363858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.784 [2024-12-06 18:03:35.363868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.784 [2024-12-06 18:03:35.363873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.784 [2024-12-06 18:03:35.363878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.784 [2024-12-06 18:03:35.363888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.784 qpair failed and we were unable to recover it. 00:26:47.784 [2024-12-06 18:03:35.373838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.784 [2024-12-06 18:03:35.373888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.784 [2024-12-06 18:03:35.373897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.784 [2024-12-06 18:03:35.373903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.784 [2024-12-06 18:03:35.373908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.784 [2024-12-06 18:03:35.373918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.784 qpair failed and we were unable to recover it. 00:26:47.784 [2024-12-06 18:03:35.383888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.784 [2024-12-06 18:03:35.383939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.784 [2024-12-06 18:03:35.383949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.784 [2024-12-06 18:03:35.383954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.784 [2024-12-06 18:03:35.383959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.784 [2024-12-06 18:03:35.383969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.784 qpair failed and we were unable to recover it. 
00:26:47.784 [2024-12-06 18:03:35.393888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.784 [2024-12-06 18:03:35.393934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.784 [2024-12-06 18:03:35.393943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.784 [2024-12-06 18:03:35.393948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.784 [2024-12-06 18:03:35.393953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.784 [2024-12-06 18:03:35.393963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.784 qpair failed and we were unable to recover it. 00:26:47.784 [2024-12-06 18:03:35.403931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.784 [2024-12-06 18:03:35.404002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.784 [2024-12-06 18:03:35.404011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.784 [2024-12-06 18:03:35.404017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.784 [2024-12-06 18:03:35.404021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.784 [2024-12-06 18:03:35.404032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.784 qpair failed and we were unable to recover it. 00:26:47.784 [2024-12-06 18:03:35.413949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.784 [2024-12-06 18:03:35.413999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.784 [2024-12-06 18:03:35.414009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.784 [2024-12-06 18:03:35.414014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.784 [2024-12-06 18:03:35.414019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.784 [2024-12-06 18:03:35.414029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.784 qpair failed and we were unable to recover it. 
00:26:47.784 [2024-12-06 18:03:35.423977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.784 [2024-12-06 18:03:35.424024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.784 [2024-12-06 18:03:35.424038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.784 [2024-12-06 18:03:35.424044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.784 [2024-12-06 18:03:35.424049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.784 [2024-12-06 18:03:35.424060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.784 qpair failed and we were unable to recover it. 00:26:47.784 [2024-12-06 18:03:35.434013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.784 [2024-12-06 18:03:35.434060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.784 [2024-12-06 18:03:35.434070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.784 [2024-12-06 18:03:35.434075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.784 [2024-12-06 18:03:35.434080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.784 [2024-12-06 18:03:35.434090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.784 qpair failed and we were unable to recover it. 00:26:47.784 [2024-12-06 18:03:35.444012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.784 [2024-12-06 18:03:35.444056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.784 [2024-12-06 18:03:35.444066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.784 [2024-12-06 18:03:35.444071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.784 [2024-12-06 18:03:35.444076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.784 [2024-12-06 18:03:35.444087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.784 qpair failed and we were unable to recover it. 
00:26:47.784 [2024-12-06 18:03:35.454047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.784 [2024-12-06 18:03:35.454089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.784 [2024-12-06 18:03:35.454099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.784 [2024-12-06 18:03:35.454109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.784 [2024-12-06 18:03:35.454113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.784 [2024-12-06 18:03:35.454124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.784 qpair failed and we were unable to recover it. 00:26:47.784 [2024-12-06 18:03:35.464128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.784 [2024-12-06 18:03:35.464178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.784 [2024-12-06 18:03:35.464188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.784 [2024-12-06 18:03:35.464193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.784 [2024-12-06 18:03:35.464204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.784 [2024-12-06 18:03:35.464214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.784 qpair failed and we were unable to recover it. 00:26:47.784 [2024-12-06 18:03:35.474118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.784 [2024-12-06 18:03:35.474216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.784 [2024-12-06 18:03:35.474226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.784 [2024-12-06 18:03:35.474231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.784 [2024-12-06 18:03:35.474236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.784 [2024-12-06 18:03:35.474247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.784 qpair failed and we were unable to recover it. 
00:26:47.784 [2024-12-06 18:03:35.484128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.784 [2024-12-06 18:03:35.484217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.784 [2024-12-06 18:03:35.484227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.784 [2024-12-06 18:03:35.484232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.784 [2024-12-06 18:03:35.484237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.784 [2024-12-06 18:03:35.484248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.784 qpair failed and we were unable to recover it. 00:26:47.784 [2024-12-06 18:03:35.494160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.784 [2024-12-06 18:03:35.494201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.784 [2024-12-06 18:03:35.494211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.784 [2024-12-06 18:03:35.494216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.784 [2024-12-06 18:03:35.494221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.784 [2024-12-06 18:03:35.494231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.784 qpair failed and we were unable to recover it. 00:26:47.784 [2024-12-06 18:03:35.504232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.784 [2024-12-06 18:03:35.504282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.785 [2024-12-06 18:03:35.504292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.785 [2024-12-06 18:03:35.504297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.785 [2024-12-06 18:03:35.504302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.785 [2024-12-06 18:03:35.504312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.785 qpair failed and we were unable to recover it. 
00:26:47.785 [2024-12-06 18:03:35.514193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.785 [2024-12-06 18:03:35.514242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.785 [2024-12-06 18:03:35.514252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.785 [2024-12-06 18:03:35.514257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.785 [2024-12-06 18:03:35.514262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.785 [2024-12-06 18:03:35.514272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.785 qpair failed and we were unable to recover it. 00:26:47.785 [2024-12-06 18:03:35.524235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.785 [2024-12-06 18:03:35.524327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.785 [2024-12-06 18:03:35.524337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.785 [2024-12-06 18:03:35.524343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.785 [2024-12-06 18:03:35.524348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.785 [2024-12-06 18:03:35.524359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.785 qpair failed and we were unable to recover it. 00:26:47.785 [2024-12-06 18:03:35.534278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.785 [2024-12-06 18:03:35.534328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.785 [2024-12-06 18:03:35.534337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.785 [2024-12-06 18:03:35.534343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.785 [2024-12-06 18:03:35.534348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.785 [2024-12-06 18:03:35.534358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.785 qpair failed and we were unable to recover it. 
00:26:47.785 [2024-12-06 18:03:35.544308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.785 [2024-12-06 18:03:35.544365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.785 [2024-12-06 18:03:35.544375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.785 [2024-12-06 18:03:35.544380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.785 [2024-12-06 18:03:35.544385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.785 [2024-12-06 18:03:35.544395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.785 qpair failed and we were unable to recover it. 00:26:47.785 [2024-12-06 18:03:35.554338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.785 [2024-12-06 18:03:35.554384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.785 [2024-12-06 18:03:35.554396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.785 [2024-12-06 18:03:35.554401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.785 [2024-12-06 18:03:35.554406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.785 [2024-12-06 18:03:35.554416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.785 qpair failed and we were unable to recover it. 00:26:47.785 [2024-12-06 18:03:35.564330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.785 [2024-12-06 18:03:35.564378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.785 [2024-12-06 18:03:35.564388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.785 [2024-12-06 18:03:35.564393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.785 [2024-12-06 18:03:35.564398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.785 [2024-12-06 18:03:35.564409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.785 qpair failed and we were unable to recover it. 
00:26:47.785 [2024-12-06 18:03:35.574410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.785 [2024-12-06 18:03:35.574452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.785 [2024-12-06 18:03:35.574462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.785 [2024-12-06 18:03:35.574467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.785 [2024-12-06 18:03:35.574472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.785 [2024-12-06 18:03:35.574482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.785 qpair failed and we were unable to recover it. 00:26:47.785 [2024-12-06 18:03:35.584425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.785 [2024-12-06 18:03:35.584491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.785 [2024-12-06 18:03:35.584501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.785 [2024-12-06 18:03:35.584506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.785 [2024-12-06 18:03:35.584511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.785 [2024-12-06 18:03:35.584522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.785 qpair failed and we were unable to recover it. 00:26:47.785 [2024-12-06 18:03:35.594403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.785 [2024-12-06 18:03:35.594452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.785 [2024-12-06 18:03:35.594461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.785 [2024-12-06 18:03:35.594469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.785 [2024-12-06 18:03:35.594474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.785 [2024-12-06 18:03:35.594484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.785 qpair failed and we were unable to recover it. 
00:26:47.785 [2024-12-06 18:03:35.604459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.785 [2024-12-06 18:03:35.604505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.785 [2024-12-06 18:03:35.604515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.785 [2024-12-06 18:03:35.604520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.785 [2024-12-06 18:03:35.604525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:47.785 [2024-12-06 18:03:35.604535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:47.785 qpair failed and we were unable to recover it. 00:26:48.047 [2024-12-06 18:03:35.614374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.048 [2024-12-06 18:03:35.614433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.048 [2024-12-06 18:03:35.614442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.048 [2024-12-06 18:03:35.614448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.048 [2024-12-06 18:03:35.614453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.048 [2024-12-06 18:03:35.614463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.048 qpair failed and we were unable to recover it. 00:26:48.048 [2024-12-06 18:03:35.624551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.048 [2024-12-06 18:03:35.624631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.048 [2024-12-06 18:03:35.624641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.048 [2024-12-06 18:03:35.624646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.048 [2024-12-06 18:03:35.624651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.048 [2024-12-06 18:03:35.624662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.048 qpair failed and we were unable to recover it. 
00:26:48.048 [2024-12-06 18:03:35.634560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.048 [2024-12-06 18:03:35.634612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.048 [2024-12-06 18:03:35.634622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.048 [2024-12-06 18:03:35.634627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.048 [2024-12-06 18:03:35.634632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.048 [2024-12-06 18:03:35.634642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.048 qpair failed and we were unable to recover it. 00:26:48.048 [2024-12-06 18:03:35.644569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.048 [2024-12-06 18:03:35.644616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.048 [2024-12-06 18:03:35.644626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.048 [2024-12-06 18:03:35.644631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.048 [2024-12-06 18:03:35.644636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.048 [2024-12-06 18:03:35.644647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.048 qpair failed and we were unable to recover it. 00:26:48.048 [2024-12-06 18:03:35.654446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.048 [2024-12-06 18:03:35.654492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.048 [2024-12-06 18:03:35.654502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.048 [2024-12-06 18:03:35.654507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.048 [2024-12-06 18:03:35.654512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.048 [2024-12-06 18:03:35.654522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.048 qpair failed and we were unable to recover it. 
00:26:48.048 [2024-12-06 18:03:35.664647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.048 [2024-12-06 18:03:35.664695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.048 [2024-12-06 18:03:35.664704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.048 [2024-12-06 18:03:35.664710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.048 [2024-12-06 18:03:35.664714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.048 [2024-12-06 18:03:35.664725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.048 qpair failed and we were unable to recover it. 00:26:48.048 [2024-12-06 18:03:35.674629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.048 [2024-12-06 18:03:35.674682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.048 [2024-12-06 18:03:35.674691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.048 [2024-12-06 18:03:35.674697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.048 [2024-12-06 18:03:35.674701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.048 [2024-12-06 18:03:35.674711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.048 qpair failed and we were unable to recover it. 00:26:48.048 [2024-12-06 18:03:35.684671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.048 [2024-12-06 18:03:35.684718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.048 [2024-12-06 18:03:35.684728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.048 [2024-12-06 18:03:35.684734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.048 [2024-12-06 18:03:35.684738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.048 [2024-12-06 18:03:35.684749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.048 qpair failed and we were unable to recover it. 
00:26:48.048 [2024-12-06 18:03:35.694671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.048 [2024-12-06 18:03:35.694763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.048 [2024-12-06 18:03:35.694773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.048 [2024-12-06 18:03:35.694778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.048 [2024-12-06 18:03:35.694783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.048 [2024-12-06 18:03:35.694794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.048 qpair failed and we were unable to recover it. 00:26:48.048 [2024-12-06 18:03:35.704785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.048 [2024-12-06 18:03:35.704880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.048 [2024-12-06 18:03:35.704899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.048 [2024-12-06 18:03:35.704906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.048 [2024-12-06 18:03:35.704911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.048 [2024-12-06 18:03:35.704926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.048 qpair failed and we were unable to recover it. 00:26:48.048 [2024-12-06 18:03:35.714761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.048 [2024-12-06 18:03:35.714812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.048 [2024-12-06 18:03:35.714830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.048 [2024-12-06 18:03:35.714836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.048 [2024-12-06 18:03:35.714842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.048 [2024-12-06 18:03:35.714856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.048 qpair failed and we were unable to recover it. 
00:26:48.048 [2024-12-06 18:03:35.724791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.048 [2024-12-06 18:03:35.724848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.048 [2024-12-06 18:03:35.724859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.048 [2024-12-06 18:03:35.724869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.048 [2024-12-06 18:03:35.724874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.048 [2024-12-06 18:03:35.724885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.048 qpair failed and we were unable to recover it. 00:26:48.048 [2024-12-06 18:03:35.734718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.048 [2024-12-06 18:03:35.734780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.048 [2024-12-06 18:03:35.734791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.048 [2024-12-06 18:03:35.734796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.048 [2024-12-06 18:03:35.734801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.048 [2024-12-06 18:03:35.734812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.048 qpair failed and we were unable to recover it. 00:26:48.048 [2024-12-06 18:03:35.744833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.048 [2024-12-06 18:03:35.744920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.048 [2024-12-06 18:03:35.744930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.048 [2024-12-06 18:03:35.744935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.048 [2024-12-06 18:03:35.744940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.049 [2024-12-06 18:03:35.744952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.049 qpair failed and we were unable to recover it. 
00:26:48.049 [2024-12-06 18:03:35.754880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.049 [2024-12-06 18:03:35.754964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.049 [2024-12-06 18:03:35.754974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.049 [2024-12-06 18:03:35.754979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.049 [2024-12-06 18:03:35.754984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.049 [2024-12-06 18:03:35.754995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.049 qpair failed and we were unable to recover it. 00:26:48.049 [2024-12-06 18:03:35.764762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.049 [2024-12-06 18:03:35.764807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.049 [2024-12-06 18:03:35.764817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.049 [2024-12-06 18:03:35.764822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.049 [2024-12-06 18:03:35.764827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.049 [2024-12-06 18:03:35.764840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.049 qpair failed and we were unable to recover it. 00:26:48.049 [2024-12-06 18:03:35.774897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.049 [2024-12-06 18:03:35.774940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.049 [2024-12-06 18:03:35.774950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.049 [2024-12-06 18:03:35.774956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.049 [2024-12-06 18:03:35.774960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.049 [2024-12-06 18:03:35.774971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.049 qpair failed and we were unable to recover it. 
00:26:48.049 [2024-12-06 18:03:35.785041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.049 [2024-12-06 18:03:35.785090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.049 [2024-12-06 18:03:35.785103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.049 [2024-12-06 18:03:35.785109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.049 [2024-12-06 18:03:35.785114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.049 [2024-12-06 18:03:35.785125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.049 qpair failed and we were unable to recover it. 00:26:48.049 [2024-12-06 18:03:35.794969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.049 [2024-12-06 18:03:35.795010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.049 [2024-12-06 18:03:35.795020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.049 [2024-12-06 18:03:35.795026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.049 [2024-12-06 18:03:35.795031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.049 [2024-12-06 18:03:35.795041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.049 qpair failed and we were unable to recover it. 00:26:48.049 [2024-12-06 18:03:35.804995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.049 [2024-12-06 18:03:35.805043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.049 [2024-12-06 18:03:35.805052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.049 [2024-12-06 18:03:35.805058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.049 [2024-12-06 18:03:35.805063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.049 [2024-12-06 18:03:35.805073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.049 qpair failed and we were unable to recover it. 
00:26:48.049 [2024-12-06 18:03:35.815054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.049 [2024-12-06 18:03:35.815127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.049 [2024-12-06 18:03:35.815137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.049 [2024-12-06 18:03:35.815142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.049 [2024-12-06 18:03:35.815146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.049 [2024-12-06 18:03:35.815157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.049 qpair failed and we were unable to recover it.
00:26:48.049 [2024-12-06 18:03:35.825088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.049 [2024-12-06 18:03:35.825142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.049 [2024-12-06 18:03:35.825153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.049 [2024-12-06 18:03:35.825158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.049 [2024-12-06 18:03:35.825163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.049 [2024-12-06 18:03:35.825174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.049 qpair failed and we were unable to recover it.
00:26:48.049 [2024-12-06 18:03:35.835054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.049 [2024-12-06 18:03:35.835098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.049 [2024-12-06 18:03:35.835112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.049 [2024-12-06 18:03:35.835117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.049 [2024-12-06 18:03:35.835122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.049 [2024-12-06 18:03:35.835133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.049 qpair failed and we were unable to recover it.
00:26:48.049 [2024-12-06 18:03:35.845079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.049 [2024-12-06 18:03:35.845122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.049 [2024-12-06 18:03:35.845132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.049 [2024-12-06 18:03:35.845137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.049 [2024-12-06 18:03:35.845142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.049 [2024-12-06 18:03:35.845153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.049 qpair failed and we were unable to recover it.
00:26:48.049 [2024-12-06 18:03:35.855093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.049 [2024-12-06 18:03:35.855141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.049 [2024-12-06 18:03:35.855153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.049 [2024-12-06 18:03:35.855158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.049 [2024-12-06 18:03:35.855163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.049 [2024-12-06 18:03:35.855174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.049 qpair failed and we were unable to recover it.
00:26:48.049 [2024-12-06 18:03:35.865218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.049 [2024-12-06 18:03:35.865270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.049 [2024-12-06 18:03:35.865280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.049 [2024-12-06 18:03:35.865286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.049 [2024-12-06 18:03:35.865290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.049 [2024-12-06 18:03:35.865301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.049 qpair failed and we were unable to recover it.
00:26:48.313 [2024-12-06 18:03:35.875206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.313 [2024-12-06 18:03:35.875252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.313 [2024-12-06 18:03:35.875261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.313 [2024-12-06 18:03:35.875267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.313 [2024-12-06 18:03:35.875271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.313 [2024-12-06 18:03:35.875282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.313 qpair failed and we were unable to recover it.
00:26:48.313 [2024-12-06 18:03:35.885248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.313 [2024-12-06 18:03:35.885319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.313 [2024-12-06 18:03:35.885353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.313 [2024-12-06 18:03:35.885359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.313 [2024-12-06 18:03:35.885364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.313 [2024-12-06 18:03:35.885381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.313 qpair failed and we were unable to recover it.
00:26:48.313 [2024-12-06 18:03:35.895267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.313 [2024-12-06 18:03:35.895333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.313 [2024-12-06 18:03:35.895343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.313 [2024-12-06 18:03:35.895349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.314 [2024-12-06 18:03:35.895356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.314 [2024-12-06 18:03:35.895367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.314 qpair failed and we were unable to recover it.
00:26:48.314 [2024-12-06 18:03:35.905207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.314 [2024-12-06 18:03:35.905255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.314 [2024-12-06 18:03:35.905265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.314 [2024-12-06 18:03:35.905270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.314 [2024-12-06 18:03:35.905275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.314 [2024-12-06 18:03:35.905285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.314 qpair failed and we were unable to recover it.
00:26:48.314 [2024-12-06 18:03:35.915344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.314 [2024-12-06 18:03:35.915392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.314 [2024-12-06 18:03:35.915401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.314 [2024-12-06 18:03:35.915407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.314 [2024-12-06 18:03:35.915411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.314 [2024-12-06 18:03:35.915421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.314 qpair failed and we were unable to recover it.
00:26:48.314 [2024-12-06 18:03:35.925191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.314 [2024-12-06 18:03:35.925238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.314 [2024-12-06 18:03:35.925248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.314 [2024-12-06 18:03:35.925253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.314 [2024-12-06 18:03:35.925257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.314 [2024-12-06 18:03:35.925268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.314 qpair failed and we were unable to recover it.
00:26:48.314 [2024-12-06 18:03:35.935344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.314 [2024-12-06 18:03:35.935384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.314 [2024-12-06 18:03:35.935394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.314 [2024-12-06 18:03:35.935399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.314 [2024-12-06 18:03:35.935404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.314 [2024-12-06 18:03:35.935414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.314 qpair failed and we were unable to recover it.
00:26:48.314 [2024-12-06 18:03:35.945357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.314 [2024-12-06 18:03:35.945400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.314 [2024-12-06 18:03:35.945409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.314 [2024-12-06 18:03:35.945415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.314 [2024-12-06 18:03:35.945419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.314 [2024-12-06 18:03:35.945430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.314 qpair failed and we were unable to recover it.
00:26:48.314 [2024-12-06 18:03:35.955410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.314 [2024-12-06 18:03:35.955452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.314 [2024-12-06 18:03:35.955462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.314 [2024-12-06 18:03:35.955467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.314 [2024-12-06 18:03:35.955471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.314 [2024-12-06 18:03:35.955482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.314 qpair failed and we were unable to recover it.
00:26:48.314 [2024-12-06 18:03:35.965397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.314 [2024-12-06 18:03:35.965432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.314 [2024-12-06 18:03:35.965442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.314 [2024-12-06 18:03:35.965447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.314 [2024-12-06 18:03:35.965452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.314 [2024-12-06 18:03:35.965462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.314 qpair failed and we were unable to recover it.
00:26:48.314 [2024-12-06 18:03:35.975433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.314 [2024-12-06 18:03:35.975478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.314 [2024-12-06 18:03:35.975487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.314 [2024-12-06 18:03:35.975493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.314 [2024-12-06 18:03:35.975497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.314 [2024-12-06 18:03:35.975507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.314 qpair failed and we were unable to recover it.
00:26:48.314 [2024-12-06 18:03:35.985443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.314 [2024-12-06 18:03:35.985485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.314 [2024-12-06 18:03:35.985497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.314 [2024-12-06 18:03:35.985502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.314 [2024-12-06 18:03:35.985507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.314 [2024-12-06 18:03:35.985517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.314 qpair failed and we were unable to recover it.
00:26:48.314 [2024-12-06 18:03:35.995505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.314 [2024-12-06 18:03:35.995551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.314 [2024-12-06 18:03:35.995561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.314 [2024-12-06 18:03:35.995566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.314 [2024-12-06 18:03:35.995571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.314 [2024-12-06 18:03:35.995581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.314 qpair failed and we were unable to recover it.
00:26:48.314 [2024-12-06 18:03:36.005518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.314 [2024-12-06 18:03:36.005560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.314 [2024-12-06 18:03:36.005569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.314 [2024-12-06 18:03:36.005575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.314 [2024-12-06 18:03:36.005579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.314 [2024-12-06 18:03:36.005589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.314 qpair failed and we were unable to recover it.
00:26:48.314 [2024-12-06 18:03:36.015555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.314 [2024-12-06 18:03:36.015594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.314 [2024-12-06 18:03:36.015603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.314 [2024-12-06 18:03:36.015609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.314 [2024-12-06 18:03:36.015613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.314 [2024-12-06 18:03:36.015624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.314 qpair failed and we were unable to recover it.
00:26:48.314 [2024-12-06 18:03:36.025577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.314 [2024-12-06 18:03:36.025616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.314 [2024-12-06 18:03:36.025625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.314 [2024-12-06 18:03:36.025631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.314 [2024-12-06 18:03:36.025638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.315 [2024-12-06 18:03:36.025648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.315 qpair failed and we were unable to recover it.
00:26:48.315 [2024-12-06 18:03:36.035599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.315 [2024-12-06 18:03:36.035646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.315 [2024-12-06 18:03:36.035656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.315 [2024-12-06 18:03:36.035661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.315 [2024-12-06 18:03:36.035666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.315 [2024-12-06 18:03:36.035676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.315 qpair failed and we were unable to recover it.
00:26:48.315 [2024-12-06 18:03:36.045628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.315 [2024-12-06 18:03:36.045670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.315 [2024-12-06 18:03:36.045679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.315 [2024-12-06 18:03:36.045684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.315 [2024-12-06 18:03:36.045689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.315 [2024-12-06 18:03:36.045699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.315 qpair failed and we were unable to recover it.
00:26:48.315 [2024-12-06 18:03:36.055671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.315 [2024-12-06 18:03:36.055708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.315 [2024-12-06 18:03:36.055718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.315 [2024-12-06 18:03:36.055724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.315 [2024-12-06 18:03:36.055728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.315 [2024-12-06 18:03:36.055738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.315 qpair failed and we were unable to recover it.
00:26:48.315 [2024-12-06 18:03:36.065660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.315 [2024-12-06 18:03:36.065698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.315 [2024-12-06 18:03:36.065708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.315 [2024-12-06 18:03:36.065713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.315 [2024-12-06 18:03:36.065718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.315 [2024-12-06 18:03:36.065728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.315 qpair failed and we were unable to recover it.
00:26:48.315 [2024-12-06 18:03:36.075736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.315 [2024-12-06 18:03:36.075778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.315 [2024-12-06 18:03:36.075789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.315 [2024-12-06 18:03:36.075794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.315 [2024-12-06 18:03:36.075799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.315 [2024-12-06 18:03:36.075809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.315 qpair failed and we were unable to recover it.
00:26:48.315 [2024-12-06 18:03:36.085607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.315 [2024-12-06 18:03:36.085645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.315 [2024-12-06 18:03:36.085655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.315 [2024-12-06 18:03:36.085660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.315 [2024-12-06 18:03:36.085665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.315 [2024-12-06 18:03:36.085676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.315 qpair failed and we were unable to recover it.
00:26:48.315 [2024-12-06 18:03:36.095765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.315 [2024-12-06 18:03:36.095806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.315 [2024-12-06 18:03:36.095816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.315 [2024-12-06 18:03:36.095821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.315 [2024-12-06 18:03:36.095826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.315 [2024-12-06 18:03:36.095836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.315 qpair failed and we were unable to recover it.
00:26:48.315 [2024-12-06 18:03:36.105776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.315 [2024-12-06 18:03:36.105815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.315 [2024-12-06 18:03:36.105825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.315 [2024-12-06 18:03:36.105830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.315 [2024-12-06 18:03:36.105835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.315 [2024-12-06 18:03:36.105845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.315 qpair failed and we were unable to recover it.
00:26:48.315 [2024-12-06 18:03:36.115819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.315 [2024-12-06 18:03:36.115864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.315 [2024-12-06 18:03:36.115885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.315 [2024-12-06 18:03:36.115892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.315 [2024-12-06 18:03:36.115898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.315 [2024-12-06 18:03:36.115912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.315 qpair failed and we were unable to recover it.
00:26:48.315 [2024-12-06 18:03:36.125841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.315 [2024-12-06 18:03:36.125908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.315 [2024-12-06 18:03:36.125926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.315 [2024-12-06 18:03:36.125933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.315 [2024-12-06 18:03:36.125938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.315 [2024-12-06 18:03:36.125952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.315 qpair failed and we were unable to recover it.
00:26:48.315 [2024-12-06 18:03:36.135835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.315 [2024-12-06 18:03:36.135881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.315 [2024-12-06 18:03:36.135899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.315 [2024-12-06 18:03:36.135905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.315 [2024-12-06 18:03:36.135910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.315 [2024-12-06 18:03:36.135924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.315 qpair failed and we were unable to recover it.
00:26:48.576 [2024-12-06 18:03:36.145902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.576 [2024-12-06 18:03:36.145943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.576 [2024-12-06 18:03:36.145954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.576 [2024-12-06 18:03:36.145960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.576 [2024-12-06 18:03:36.145964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.576 [2024-12-06 18:03:36.145976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.576 qpair failed and we were unable to recover it.
00:26:48.576 [2024-12-06 18:03:36.155943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.576 [2024-12-06 18:03:36.156030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.576 [2024-12-06 18:03:36.156040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.576 [2024-12-06 18:03:36.156050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.576 [2024-12-06 18:03:36.156055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.576 [2024-12-06 18:03:36.156065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.576 qpair failed and we were unable to recover it.
00:26:48.576 [2024-12-06 18:03:36.165850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.576 [2024-12-06 18:03:36.165893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.576 [2024-12-06 18:03:36.165905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.576 [2024-12-06 18:03:36.165910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.576 [2024-12-06 18:03:36.165915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.576 [2024-12-06 18:03:36.165926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.576 qpair failed and we were unable to recover it.
00:26:48.576 [2024-12-06 18:03:36.175983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.576 [2024-12-06 18:03:36.176022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.576 [2024-12-06 18:03:36.176033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.576 [2024-12-06 18:03:36.176038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.576 [2024-12-06 18:03:36.176043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.576 [2024-12-06 18:03:36.176054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.576 qpair failed and we were unable to recover it.
00:26:48.576 [2024-12-06 18:03:36.186006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.576 [2024-12-06 18:03:36.186048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.576 [2024-12-06 18:03:36.186057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.576 [2024-12-06 18:03:36.186063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.576 [2024-12-06 18:03:36.186067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.576 [2024-12-06 18:03:36.186078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.576 qpair failed and we were unable to recover it.
00:26:48.576 [2024-12-06 18:03:36.196019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.576 [2024-12-06 18:03:36.196062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.576 [2024-12-06 18:03:36.196072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.576 [2024-12-06 18:03:36.196077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.576 [2024-12-06 18:03:36.196082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.576 [2024-12-06 18:03:36.196095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.576 qpair failed and we were unable to recover it.
00:26:48.576 [2024-12-06 18:03:36.206068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.576 [2024-12-06 18:03:36.206112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.576 [2024-12-06 18:03:36.206122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.576 [2024-12-06 18:03:36.206127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.576 [2024-12-06 18:03:36.206131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.576 [2024-12-06 18:03:36.206142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.576 qpair failed and we were unable to recover it.
00:26:48.576 [2024-12-06 18:03:36.216050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.576 [2024-12-06 18:03:36.216088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.576 [2024-12-06 18:03:36.216098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.576 [2024-12-06 18:03:36.216108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.576 [2024-12-06 18:03:36.216113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.576 [2024-12-06 18:03:36.216123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.576 qpair failed and we were unable to recover it.
00:26:48.576 [2024-12-06 18:03:36.225970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.576 [2024-12-06 18:03:36.226011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.576 [2024-12-06 18:03:36.226021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.576 [2024-12-06 18:03:36.226026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.576 [2024-12-06 18:03:36.226031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.576 [2024-12-06 18:03:36.226041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.576 qpair failed and we were unable to recover it.
00:26:48.576 [2024-12-06 18:03:36.236140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.576 [2024-12-06 18:03:36.236193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.576 [2024-12-06 18:03:36.236203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.576 [2024-12-06 18:03:36.236208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.576 [2024-12-06 18:03:36.236213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.576 [2024-12-06 18:03:36.236224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.577 qpair failed and we were unable to recover it.
00:26:48.577 [2024-12-06 18:03:36.246132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.577 [2024-12-06 18:03:36.246175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.577 [2024-12-06 18:03:36.246185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.577 [2024-12-06 18:03:36.246190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.577 [2024-12-06 18:03:36.246195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.577 [2024-12-06 18:03:36.246205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.577 qpair failed and we were unable to recover it.
00:26:48.577 [2024-12-06 18:03:36.256151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.577 [2024-12-06 18:03:36.256189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.577 [2024-12-06 18:03:36.256198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.577 [2024-12-06 18:03:36.256204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.577 [2024-12-06 18:03:36.256208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.577 [2024-12-06 18:03:36.256219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.577 qpair failed and we were unable to recover it.
00:26:48.577 [2024-12-06 18:03:36.266201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.577 [2024-12-06 18:03:36.266246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.577 [2024-12-06 18:03:36.266257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.577 [2024-12-06 18:03:36.266262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.577 [2024-12-06 18:03:36.266267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.577 [2024-12-06 18:03:36.266278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.577 qpair failed and we were unable to recover it.
00:26:48.577 [2024-12-06 18:03:36.276263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.577 [2024-12-06 18:03:36.276305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.577 [2024-12-06 18:03:36.276315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.577 [2024-12-06 18:03:36.276320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.577 [2024-12-06 18:03:36.276325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.577 [2024-12-06 18:03:36.276336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.577 qpair failed and we were unable to recover it.
00:26:48.577 [2024-12-06 18:03:36.286122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.577 [2024-12-06 18:03:36.286161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.577 [2024-12-06 18:03:36.286171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.577 [2024-12-06 18:03:36.286179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.577 [2024-12-06 18:03:36.286184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.577 [2024-12-06 18:03:36.286195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.577 qpair failed and we were unable to recover it.
00:26:48.577 [2024-12-06 18:03:36.296260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.577 [2024-12-06 18:03:36.296305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.577 [2024-12-06 18:03:36.296314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.577 [2024-12-06 18:03:36.296319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.577 [2024-12-06 18:03:36.296324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.577 [2024-12-06 18:03:36.296334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.577 qpair failed and we were unable to recover it.
00:26:48.577 [2024-12-06 18:03:36.306309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.577 [2024-12-06 18:03:36.306352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.577 [2024-12-06 18:03:36.306362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.577 [2024-12-06 18:03:36.306367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.577 [2024-12-06 18:03:36.306372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.577 [2024-12-06 18:03:36.306382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.577 qpair failed and we were unable to recover it.
00:26:48.577 [2024-12-06 18:03:36.316373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.577 [2024-12-06 18:03:36.316415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.577 [2024-12-06 18:03:36.316425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.577 [2024-12-06 18:03:36.316430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.577 [2024-12-06 18:03:36.316434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.577 [2024-12-06 18:03:36.316445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.577 qpair failed and we were unable to recover it.
00:26:48.577 [2024-12-06 18:03:36.326396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.577 [2024-12-06 18:03:36.326438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.577 [2024-12-06 18:03:36.326448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.577 [2024-12-06 18:03:36.326453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.577 [2024-12-06 18:03:36.326458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.577 [2024-12-06 18:03:36.326474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.577 qpair failed and we were unable to recover it.
00:26:48.577 [2024-12-06 18:03:36.336389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.577 [2024-12-06 18:03:36.336426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.577 [2024-12-06 18:03:36.336435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.577 [2024-12-06 18:03:36.336441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.577 [2024-12-06 18:03:36.336445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.577 [2024-12-06 18:03:36.336455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.577 qpair failed and we were unable to recover it.
00:26:48.577 [2024-12-06 18:03:36.346281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.577 [2024-12-06 18:03:36.346319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.577 [2024-12-06 18:03:36.346329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.577 [2024-12-06 18:03:36.346334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.577 [2024-12-06 18:03:36.346339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.577 [2024-12-06 18:03:36.346349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.577 qpair failed and we were unable to recover it.
00:26:48.577 [2024-12-06 18:03:36.356469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.577 [2024-12-06 18:03:36.356515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.577 [2024-12-06 18:03:36.356525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.577 [2024-12-06 18:03:36.356530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.577 [2024-12-06 18:03:36.356535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.577 [2024-12-06 18:03:36.356546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.577 qpair failed and we were unable to recover it.
00:26:48.577 [2024-12-06 18:03:36.366336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.577 [2024-12-06 18:03:36.366375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.577 [2024-12-06 18:03:36.366385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.577 [2024-12-06 18:03:36.366390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.577 [2024-12-06 18:03:36.366395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.577 [2024-12-06 18:03:36.366405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.578 qpair failed and we were unable to recover it.
00:26:48.578 [2024-12-06 18:03:36.376363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.578 [2024-12-06 18:03:36.376403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.578 [2024-12-06 18:03:36.376413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.578 [2024-12-06 18:03:36.376418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.578 [2024-12-06 18:03:36.376423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.578 [2024-12-06 18:03:36.376433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.578 qpair failed and we were unable to recover it.
00:26:48.578 [2024-12-06 18:03:36.386531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.578 [2024-12-06 18:03:36.386572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.578 [2024-12-06 18:03:36.386582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.578 [2024-12-06 18:03:36.386587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.578 [2024-12-06 18:03:36.386592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.578 [2024-12-06 18:03:36.386603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.578 qpair failed and we were unable to recover it.
00:26:48.578 [2024-12-06 18:03:36.396565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.578 [2024-12-06 18:03:36.396652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.578 [2024-12-06 18:03:36.396662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.578 [2024-12-06 18:03:36.396667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.578 [2024-12-06 18:03:36.396672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.578 [2024-12-06 18:03:36.396682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.578 qpair failed and we were unable to recover it.
00:26:48.838 [2024-12-06 18:03:36.406575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.838 [2024-12-06 18:03:36.406616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.838 [2024-12-06 18:03:36.406626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.838 [2024-12-06 18:03:36.406631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.838 [2024-12-06 18:03:36.406636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.838 [2024-12-06 18:03:36.406646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.838 qpair failed and we were unable to recover it.
00:26:48.838 [2024-12-06 18:03:36.416602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.838 [2024-12-06 18:03:36.416643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.838 [2024-12-06 18:03:36.416656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.838 [2024-12-06 18:03:36.416662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.838 [2024-12-06 18:03:36.416667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.838 [2024-12-06 18:03:36.416677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.838 qpair failed and we were unable to recover it.
00:26:48.838 [2024-12-06 18:03:36.426634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.838 [2024-12-06 18:03:36.426682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.838 [2024-12-06 18:03:36.426691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.838 [2024-12-06 18:03:36.426697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.838 [2024-12-06 18:03:36.426701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.838 [2024-12-06 18:03:36.426712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.838 qpair failed and we were unable to recover it.
00:26:48.838 [2024-12-06 18:03:36.436691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.838 [2024-12-06 18:03:36.436738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.838 [2024-12-06 18:03:36.436747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.838 [2024-12-06 18:03:36.436753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.838 [2024-12-06 18:03:36.436757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.838 [2024-12-06 18:03:36.436768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.838 qpair failed and we were unable to recover it.
00:26:48.838 [2024-12-06 18:03:36.446734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.838 [2024-12-06 18:03:36.446772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.838 [2024-12-06 18:03:36.446781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.838 [2024-12-06 18:03:36.446786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.839 [2024-12-06 18:03:36.446791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.839 [2024-12-06 18:03:36.446801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.839 qpair failed and we were unable to recover it.
00:26:48.839 [2024-12-06 18:03:36.456602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.839 [2024-12-06 18:03:36.456663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.839 [2024-12-06 18:03:36.456673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.839 [2024-12-06 18:03:36.456678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.839 [2024-12-06 18:03:36.456686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.839 [2024-12-06 18:03:36.456696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.839 qpair failed and we were unable to recover it.
00:26:48.839 [2024-12-06 18:03:36.466739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.839 [2024-12-06 18:03:36.466780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.839 [2024-12-06 18:03:36.466790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.839 [2024-12-06 18:03:36.466795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.839 [2024-12-06 18:03:36.466800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.839 [2024-12-06 18:03:36.466810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.839 qpair failed and we were unable to recover it.
00:26:48.839 [2024-12-06 18:03:36.476762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.839 [2024-12-06 18:03:36.476801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.839 [2024-12-06 18:03:36.476811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.839 [2024-12-06 18:03:36.476816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.839 [2024-12-06 18:03:36.476821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.839 [2024-12-06 18:03:36.476831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.839 qpair failed and we were unable to recover it.
00:26:48.839 [2024-12-06 18:03:36.486805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.839 [2024-12-06 18:03:36.486847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.839 [2024-12-06 18:03:36.486856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.839 [2024-12-06 18:03:36.486862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.839 [2024-12-06 18:03:36.486866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.839 [2024-12-06 18:03:36.486876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.839 qpair failed and we were unable to recover it.
00:26:48.839 [2024-12-06 18:03:36.496830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:48.839 [2024-12-06 18:03:36.496869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:48.839 [2024-12-06 18:03:36.496879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:48.839 [2024-12-06 18:03:36.496884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:48.839 [2024-12-06 18:03:36.496889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:48.839 [2024-12-06 18:03:36.496899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:48.839 qpair failed and we were unable to recover it.
00:26:48.839 [2024-12-06 18:03:36.506732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.839 [2024-12-06 18:03:36.506770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.839 [2024-12-06 18:03:36.506780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.839 [2024-12-06 18:03:36.506785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.839 [2024-12-06 18:03:36.506789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.839 [2024-12-06 18:03:36.506800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.839 qpair failed and we were unable to recover it. 00:26:48.839 [2024-12-06 18:03:36.516891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.839 [2024-12-06 18:03:36.516938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.839 [2024-12-06 18:03:36.516948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.839 [2024-12-06 18:03:36.516953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.839 [2024-12-06 18:03:36.516958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.839 [2024-12-06 18:03:36.516969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.839 qpair failed and we were unable to recover it. 00:26:48.839 [2024-12-06 18:03:36.526910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.839 [2024-12-06 18:03:36.526946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.839 [2024-12-06 18:03:36.526956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.839 [2024-12-06 18:03:36.526961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.839 [2024-12-06 18:03:36.526966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.839 [2024-12-06 18:03:36.526976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.839 qpair failed and we were unable to recover it. 
00:26:48.839 [2024-12-06 18:03:36.536802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.839 [2024-12-06 18:03:36.536840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.839 [2024-12-06 18:03:36.536849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.839 [2024-12-06 18:03:36.536854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.839 [2024-12-06 18:03:36.536859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.839 [2024-12-06 18:03:36.536869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.839 qpair failed and we were unable to recover it. 00:26:48.839 [2024-12-06 18:03:36.546965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.839 [2024-12-06 18:03:36.547012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.839 [2024-12-06 18:03:36.547025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.839 [2024-12-06 18:03:36.547030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.839 [2024-12-06 18:03:36.547034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.839 [2024-12-06 18:03:36.547044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.839 qpair failed and we were unable to recover it. 00:26:48.839 [2024-12-06 18:03:36.557002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.839 [2024-12-06 18:03:36.557046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.839 [2024-12-06 18:03:36.557055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.839 [2024-12-06 18:03:36.557060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.839 [2024-12-06 18:03:36.557065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.839 [2024-12-06 18:03:36.557075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.839 qpair failed and we were unable to recover it. 
00:26:48.839 [2024-12-06 18:03:36.567023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.839 [2024-12-06 18:03:36.567063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.839 [2024-12-06 18:03:36.567073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.839 [2024-12-06 18:03:36.567078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.839 [2024-12-06 18:03:36.567083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.839 [2024-12-06 18:03:36.567093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.839 qpair failed and we were unable to recover it. 00:26:48.839 [2024-12-06 18:03:36.577054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.839 [2024-12-06 18:03:36.577095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.839 [2024-12-06 18:03:36.577108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.839 [2024-12-06 18:03:36.577113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.839 [2024-12-06 18:03:36.577118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.840 [2024-12-06 18:03:36.577128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.840 qpair failed and we were unable to recover it. 00:26:48.840 [2024-12-06 18:03:36.587081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.840 [2024-12-06 18:03:36.587129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.840 [2024-12-06 18:03:36.587139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.840 [2024-12-06 18:03:36.587144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.840 [2024-12-06 18:03:36.587151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.840 [2024-12-06 18:03:36.587162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.840 qpair failed and we were unable to recover it. 
00:26:48.840 [2024-12-06 18:03:36.597094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.840 [2024-12-06 18:03:36.597139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.840 [2024-12-06 18:03:36.597149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.840 [2024-12-06 18:03:36.597154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.840 [2024-12-06 18:03:36.597159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.840 [2024-12-06 18:03:36.597169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.840 qpair failed and we were unable to recover it. 00:26:48.840 [2024-12-06 18:03:36.607138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.840 [2024-12-06 18:03:36.607178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.840 [2024-12-06 18:03:36.607188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.840 [2024-12-06 18:03:36.607193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.840 [2024-12-06 18:03:36.607198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.840 [2024-12-06 18:03:36.607208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.840 qpair failed and we were unable to recover it. 00:26:48.840 [2024-12-06 18:03:36.617147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.840 [2024-12-06 18:03:36.617190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.840 [2024-12-06 18:03:36.617200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.840 [2024-12-06 18:03:36.617206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.840 [2024-12-06 18:03:36.617211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.840 [2024-12-06 18:03:36.617222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.840 qpair failed and we were unable to recover it. 
00:26:48.840 [2024-12-06 18:03:36.627136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.840 [2024-12-06 18:03:36.627175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.840 [2024-12-06 18:03:36.627185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.840 [2024-12-06 18:03:36.627190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.840 [2024-12-06 18:03:36.627195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.840 [2024-12-06 18:03:36.627205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.840 qpair failed and we were unable to recover it. 00:26:48.840 [2024-12-06 18:03:36.637250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.840 [2024-12-06 18:03:36.637293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.840 [2024-12-06 18:03:36.637302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.840 [2024-12-06 18:03:36.637308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.840 [2024-12-06 18:03:36.637312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.840 [2024-12-06 18:03:36.637323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.840 qpair failed and we were unable to recover it. 00:26:48.840 [2024-12-06 18:03:36.647201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.840 [2024-12-06 18:03:36.647245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.840 [2024-12-06 18:03:36.647254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.840 [2024-12-06 18:03:36.647260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.840 [2024-12-06 18:03:36.647264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.840 [2024-12-06 18:03:36.647275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.840 qpair failed and we were unable to recover it. 
00:26:48.840 [2024-12-06 18:03:36.657215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.840 [2024-12-06 18:03:36.657257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.840 [2024-12-06 18:03:36.657266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.840 [2024-12-06 18:03:36.657271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.840 [2024-12-06 18:03:36.657276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:48.840 [2024-12-06 18:03:36.657286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:48.840 qpair failed and we were unable to recover it. 00:26:49.100 [2024-12-06 18:03:36.667274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.100 [2024-12-06 18:03:36.667313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.100 [2024-12-06 18:03:36.667322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.100 [2024-12-06 18:03:36.667327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.100 [2024-12-06 18:03:36.667332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.100 [2024-12-06 18:03:36.667342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.100 qpair failed and we were unable to recover it. 00:26:49.100 [2024-12-06 18:03:36.677289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.100 [2024-12-06 18:03:36.677334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.100 [2024-12-06 18:03:36.677346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.100 [2024-12-06 18:03:36.677351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.100 [2024-12-06 18:03:36.677356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.100 [2024-12-06 18:03:36.677366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.100 qpair failed and we were unable to recover it. 
00:26:49.100 [2024-12-06 18:03:36.687336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.100 [2024-12-06 18:03:36.687377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.100 [2024-12-06 18:03:36.687387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.100 [2024-12-06 18:03:36.687392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.100 [2024-12-06 18:03:36.687397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.100 [2024-12-06 18:03:36.687407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.100 qpair failed and we were unable to recover it. 00:26:49.100 [2024-12-06 18:03:36.697354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.100 [2024-12-06 18:03:36.697392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.100 [2024-12-06 18:03:36.697402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.100 [2024-12-06 18:03:36.697407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.100 [2024-12-06 18:03:36.697412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.100 [2024-12-06 18:03:36.697422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.100 qpair failed and we were unable to recover it. 00:26:49.100 [2024-12-06 18:03:36.707390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.100 [2024-12-06 18:03:36.707434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.100 [2024-12-06 18:03:36.707444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.100 [2024-12-06 18:03:36.707449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.100 [2024-12-06 18:03:36.707454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.100 [2024-12-06 18:03:36.707464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.100 qpair failed and we were unable to recover it. 
00:26:49.100 [2024-12-06 18:03:36.717431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.100 [2024-12-06 18:03:36.717473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.100 [2024-12-06 18:03:36.717482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.100 [2024-12-06 18:03:36.717490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.100 [2024-12-06 18:03:36.717495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.100 [2024-12-06 18:03:36.717505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.100 qpair failed and we were unable to recover it. 00:26:49.100 [2024-12-06 18:03:36.727431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.100 [2024-12-06 18:03:36.727473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.100 [2024-12-06 18:03:36.727483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.100 [2024-12-06 18:03:36.727488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.100 [2024-12-06 18:03:36.727493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.100 [2024-12-06 18:03:36.727503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.100 qpair failed and we were unable to recover it. 00:26:49.100 [2024-12-06 18:03:36.737443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.100 [2024-12-06 18:03:36.737487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.100 [2024-12-06 18:03:36.737496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.100 [2024-12-06 18:03:36.737501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.100 [2024-12-06 18:03:36.737506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.100 [2024-12-06 18:03:36.737516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.100 qpair failed and we were unable to recover it. 
00:26:49.100 [2024-12-06 18:03:36.747360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.100 [2024-12-06 18:03:36.747400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.100 [2024-12-06 18:03:36.747410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.100 [2024-12-06 18:03:36.747415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.100 [2024-12-06 18:03:36.747419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.100 [2024-12-06 18:03:36.747430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.100 qpair failed and we were unable to recover it. 00:26:49.100 [2024-12-06 18:03:36.757550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.101 [2024-12-06 18:03:36.757591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.101 [2024-12-06 18:03:36.757601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.101 [2024-12-06 18:03:36.757606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.101 [2024-12-06 18:03:36.757610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.101 [2024-12-06 18:03:36.757623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.101 qpair failed and we were unable to recover it. 00:26:49.101 [2024-12-06 18:03:36.767505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.101 [2024-12-06 18:03:36.767543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.101 [2024-12-06 18:03:36.767553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.101 [2024-12-06 18:03:36.767558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.101 [2024-12-06 18:03:36.767562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.101 [2024-12-06 18:03:36.767573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.101 qpair failed and we were unable to recover it. 
00:26:49.101 [2024-12-06 18:03:36.777560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.101 [2024-12-06 18:03:36.777598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.101 [2024-12-06 18:03:36.777608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.101 [2024-12-06 18:03:36.777613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.101 [2024-12-06 18:03:36.777618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.101 [2024-12-06 18:03:36.777628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.101 qpair failed and we were unable to recover it. 00:26:49.101 [2024-12-06 18:03:36.787604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.101 [2024-12-06 18:03:36.787642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.101 [2024-12-06 18:03:36.787652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.101 [2024-12-06 18:03:36.787657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.101 [2024-12-06 18:03:36.787661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.101 [2024-12-06 18:03:36.787672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.101 qpair failed and we were unable to recover it. 00:26:49.101 [2024-12-06 18:03:36.797650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.101 [2024-12-06 18:03:36.797692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.101 [2024-12-06 18:03:36.797702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.101 [2024-12-06 18:03:36.797707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.101 [2024-12-06 18:03:36.797712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.101 [2024-12-06 18:03:36.797721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.101 qpair failed and we were unable to recover it. 
00:26:49.101 [2024-12-06 18:03:36.807662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.101 [2024-12-06 18:03:36.807703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.101 [2024-12-06 18:03:36.807712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.101 [2024-12-06 18:03:36.807718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.101 [2024-12-06 18:03:36.807722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.101 [2024-12-06 18:03:36.807732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.101 qpair failed and we were unable to recover it. 00:26:49.101 [2024-12-06 18:03:36.817684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.101 [2024-12-06 18:03:36.817723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.101 [2024-12-06 18:03:36.817732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.101 [2024-12-06 18:03:36.817737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.101 [2024-12-06 18:03:36.817742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.101 [2024-12-06 18:03:36.817752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.101 qpair failed and we were unable to recover it. 00:26:49.101 [2024-12-06 18:03:36.827711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.101 [2024-12-06 18:03:36.827751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.101 [2024-12-06 18:03:36.827761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.101 [2024-12-06 18:03:36.827766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.101 [2024-12-06 18:03:36.827770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.101 [2024-12-06 18:03:36.827780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.101 qpair failed and we were unable to recover it. 
00:26:49.101 [2024-12-06 18:03:36.837716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.101 [2024-12-06 18:03:36.837795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.101 [2024-12-06 18:03:36.837804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.101 [2024-12-06 18:03:36.837810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.101 [2024-12-06 18:03:36.837814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.101 [2024-12-06 18:03:36.837824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.101 qpair failed and we were unable to recover it. 00:26:49.101 [2024-12-06 18:03:36.847786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.101 [2024-12-06 18:03:36.847833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.101 [2024-12-06 18:03:36.847851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.101 [2024-12-06 18:03:36.847861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.101 [2024-12-06 18:03:36.847867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.101 [2024-12-06 18:03:36.847881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.101 qpair failed and we were unable to recover it. 00:26:49.101 [2024-12-06 18:03:36.857802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.101 [2024-12-06 18:03:36.857850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.101 [2024-12-06 18:03:36.857868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.101 [2024-12-06 18:03:36.857874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.101 [2024-12-06 18:03:36.857879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.101 [2024-12-06 18:03:36.857893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.101 qpair failed and we were unable to recover it. 
00:26:49.101 [2024-12-06 18:03:36.867809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.101 [2024-12-06 18:03:36.867860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.101 [2024-12-06 18:03:36.867871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.101 [2024-12-06 18:03:36.867876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.101 [2024-12-06 18:03:36.867881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.101 [2024-12-06 18:03:36.867892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.101 qpair failed and we were unable to recover it. 00:26:49.101 [2024-12-06 18:03:36.877848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.101 [2024-12-06 18:03:36.877901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.101 [2024-12-06 18:03:36.877911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.101 [2024-12-06 18:03:36.877917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.101 [2024-12-06 18:03:36.877921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.101 [2024-12-06 18:03:36.877932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.101 qpair failed and we were unable to recover it. 00:26:49.101 [2024-12-06 18:03:36.887884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.102 [2024-12-06 18:03:36.887946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.102 [2024-12-06 18:03:36.887956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.102 [2024-12-06 18:03:36.887961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.102 [2024-12-06 18:03:36.887966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.102 [2024-12-06 18:03:36.887980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.102 qpair failed and we were unable to recover it. 
00:26:49.102 [2024-12-06 18:03:36.897862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.102 [2024-12-06 18:03:36.897903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.102 [2024-12-06 18:03:36.897912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.102 [2024-12-06 18:03:36.897918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.102 [2024-12-06 18:03:36.897922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.102 [2024-12-06 18:03:36.897933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.102 qpair failed and we were unable to recover it. 00:26:49.102 [2024-12-06 18:03:36.907923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.102 [2024-12-06 18:03:36.907976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.102 [2024-12-06 18:03:36.907986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.102 [2024-12-06 18:03:36.907992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.102 [2024-12-06 18:03:36.907996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.102 [2024-12-06 18:03:36.908007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.102 qpair failed and we were unable to recover it. 00:26:49.102 [2024-12-06 18:03:36.917963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.102 [2024-12-06 18:03:36.918005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.102 [2024-12-06 18:03:36.918015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.102 [2024-12-06 18:03:36.918021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.102 [2024-12-06 18:03:36.918025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.102 [2024-12-06 18:03:36.918036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.102 qpair failed and we were unable to recover it. 
00:26:49.362 [2024-12-06 18:03:36.927842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.363 [2024-12-06 18:03:36.927887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.363 [2024-12-06 18:03:36.927897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.363 [2024-12-06 18:03:36.927903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.363 [2024-12-06 18:03:36.927907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.363 [2024-12-06 18:03:36.927918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.363 qpair failed and we were unable to recover it. 00:26:49.363 [2024-12-06 18:03:36.938019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.363 [2024-12-06 18:03:36.938058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.363 [2024-12-06 18:03:36.938068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.363 [2024-12-06 18:03:36.938073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.363 [2024-12-06 18:03:36.938078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.363 [2024-12-06 18:03:36.938089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.363 qpair failed and we were unable to recover it. 00:26:49.363 [2024-12-06 18:03:36.948129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.363 [2024-12-06 18:03:36.948220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.363 [2024-12-06 18:03:36.948230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.363 [2024-12-06 18:03:36.948235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.363 [2024-12-06 18:03:36.948240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.363 [2024-12-06 18:03:36.948251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.363 qpair failed and we were unable to recover it. 
00:26:49.363 [2024-12-06 18:03:36.958087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.363 [2024-12-06 18:03:36.958132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.363 [2024-12-06 18:03:36.958142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.363 [2024-12-06 18:03:36.958147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.363 [2024-12-06 18:03:36.958152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.363 [2024-12-06 18:03:36.958163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.363 qpair failed and we were unable to recover it. 00:26:49.363 [2024-12-06 18:03:36.968123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.363 [2024-12-06 18:03:36.968172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.363 [2024-12-06 18:03:36.968182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.363 [2024-12-06 18:03:36.968188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.363 [2024-12-06 18:03:36.968192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.363 [2024-12-06 18:03:36.968203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.363 qpair failed and we were unable to recover it. 00:26:49.363 [2024-12-06 18:03:36.978155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.363 [2024-12-06 18:03:36.978200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.363 [2024-12-06 18:03:36.978212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.363 [2024-12-06 18:03:36.978217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.363 [2024-12-06 18:03:36.978222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.363 [2024-12-06 18:03:36.978232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.363 qpair failed and we were unable to recover it. 
00:26:49.363 [2024-12-06 18:03:36.988147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.363 [2024-12-06 18:03:36.988192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.363 [2024-12-06 18:03:36.988203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.363 [2024-12-06 18:03:36.988209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.363 [2024-12-06 18:03:36.988213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.363 [2024-12-06 18:03:36.988224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.363 qpair failed and we were unable to recover it. 00:26:49.363 [2024-12-06 18:03:36.998166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.363 [2024-12-06 18:03:36.998209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.363 [2024-12-06 18:03:36.998219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.363 [2024-12-06 18:03:36.998224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.363 [2024-12-06 18:03:36.998229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.363 [2024-12-06 18:03:36.998239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.363 qpair failed and we were unable to recover it. 00:26:49.363 [2024-12-06 18:03:37.008195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.363 [2024-12-06 18:03:37.008244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.363 [2024-12-06 18:03:37.008254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.363 [2024-12-06 18:03:37.008259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.363 [2024-12-06 18:03:37.008264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.363 [2024-12-06 18:03:37.008274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.363 qpair failed and we were unable to recover it. 
00:26:49.363 [2024-12-06 18:03:37.018199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.363 [2024-12-06 18:03:37.018240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.363 [2024-12-06 18:03:37.018250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.363 [2024-12-06 18:03:37.018255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.363 [2024-12-06 18:03:37.018266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.363 [2024-12-06 18:03:37.018277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.363 qpair failed and we were unable to recover it. 00:26:49.363 [2024-12-06 18:03:37.028229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.363 [2024-12-06 18:03:37.028271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.363 [2024-12-06 18:03:37.028281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.363 [2024-12-06 18:03:37.028286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.363 [2024-12-06 18:03:37.028291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.363 [2024-12-06 18:03:37.028302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.363 qpair failed and we were unable to recover it. 00:26:49.363 [2024-12-06 18:03:37.038274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.363 [2024-12-06 18:03:37.038321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.363 [2024-12-06 18:03:37.038331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.363 [2024-12-06 18:03:37.038337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.363 [2024-12-06 18:03:37.038342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.363 [2024-12-06 18:03:37.038352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.363 qpair failed and we were unable to recover it. 
00:26:49.363 [2024-12-06 18:03:37.048293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.363 [2024-12-06 18:03:37.048333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.363 [2024-12-06 18:03:37.048342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.363 [2024-12-06 18:03:37.048347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.363 [2024-12-06 18:03:37.048353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.363 [2024-12-06 18:03:37.048363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.363 qpair failed and we were unable to recover it. 00:26:49.363 [2024-12-06 18:03:37.058338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.364 [2024-12-06 18:03:37.058378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.364 [2024-12-06 18:03:37.058388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.364 [2024-12-06 18:03:37.058393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.364 [2024-12-06 18:03:37.058398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.364 [2024-12-06 18:03:37.058408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.364 qpair failed and we were unable to recover it. 00:26:49.364 [2024-12-06 18:03:37.068339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.364 [2024-12-06 18:03:37.068381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.364 [2024-12-06 18:03:37.068391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.364 [2024-12-06 18:03:37.068396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.364 [2024-12-06 18:03:37.068401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.364 [2024-12-06 18:03:37.068411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.364 qpair failed and we were unable to recover it. 
00:26:49.364 [2024-12-06 18:03:37.078254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.364 [2024-12-06 18:03:37.078300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.364 [2024-12-06 18:03:37.078309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.364 [2024-12-06 18:03:37.078314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.364 [2024-12-06 18:03:37.078319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.364 [2024-12-06 18:03:37.078330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.364 qpair failed and we were unable to recover it. 00:26:49.364 [2024-12-06 18:03:37.088395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.364 [2024-12-06 18:03:37.088440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.364 [2024-12-06 18:03:37.088450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.364 [2024-12-06 18:03:37.088455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.364 [2024-12-06 18:03:37.088460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.364 [2024-12-06 18:03:37.088471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.364 qpair failed and we were unable to recover it. 00:26:49.364 [2024-12-06 18:03:37.098417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.364 [2024-12-06 18:03:37.098502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.364 [2024-12-06 18:03:37.098511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.364 [2024-12-06 18:03:37.098516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.364 [2024-12-06 18:03:37.098523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.364 [2024-12-06 18:03:37.098533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.364 qpair failed and we were unable to recover it. 
00:26:49.364 [2024-12-06 18:03:37.108401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.364 [2024-12-06 18:03:37.108443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.364 [2024-12-06 18:03:37.108455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.364 [2024-12-06 18:03:37.108460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.364 [2024-12-06 18:03:37.108465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.364 [2024-12-06 18:03:37.108476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.364 qpair failed and we were unable to recover it. 00:26:49.364 [2024-12-06 18:03:37.118509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.364 [2024-12-06 18:03:37.118549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.364 [2024-12-06 18:03:37.118559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.364 [2024-12-06 18:03:37.118564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.364 [2024-12-06 18:03:37.118569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.364 [2024-12-06 18:03:37.118579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.364 qpair failed and we were unable to recover it. 00:26:49.364 [2024-12-06 18:03:37.128367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.364 [2024-12-06 18:03:37.128405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.364 [2024-12-06 18:03:37.128414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.364 [2024-12-06 18:03:37.128419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.364 [2024-12-06 18:03:37.128424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.364 [2024-12-06 18:03:37.128435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.364 qpair failed and we were unable to recover it. 
00:26:49.364 [2024-12-06 18:03:37.138509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.364 [2024-12-06 18:03:37.138546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.364 [2024-12-06 18:03:37.138555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.364 [2024-12-06 18:03:37.138561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.364 [2024-12-06 18:03:37.138565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.364 [2024-12-06 18:03:37.138576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.364 qpair failed and we were unable to recover it. 00:26:49.364 [2024-12-06 18:03:37.148435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.364 [2024-12-06 18:03:37.148476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.364 [2024-12-06 18:03:37.148485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.364 [2024-12-06 18:03:37.148491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.364 [2024-12-06 18:03:37.148498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.364 [2024-12-06 18:03:37.148509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.364 qpair failed and we were unable to recover it. 00:26:49.364 [2024-12-06 18:03:37.158558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.364 [2024-12-06 18:03:37.158604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.364 [2024-12-06 18:03:37.158614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.364 [2024-12-06 18:03:37.158619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.364 [2024-12-06 18:03:37.158623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.364 [2024-12-06 18:03:37.158634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.364 qpair failed and we were unable to recover it. 
00:26:49.364 [2024-12-06 18:03:37.168685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.364 [2024-12-06 18:03:37.168740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.364 [2024-12-06 18:03:37.168750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.364 [2024-12-06 18:03:37.168755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.364 [2024-12-06 18:03:37.168760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.364 [2024-12-06 18:03:37.168770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.364 qpair failed and we were unable to recover it. 00:26:49.364 [2024-12-06 18:03:37.178631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.364 [2024-12-06 18:03:37.178687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.364 [2024-12-06 18:03:37.178697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.364 [2024-12-06 18:03:37.178703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.364 [2024-12-06 18:03:37.178707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.364 [2024-12-06 18:03:37.178717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.364 qpair failed and we were unable to recover it. 00:26:49.626 [2024-12-06 18:03:37.188578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.626 [2024-12-06 18:03:37.188635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.626 [2024-12-06 18:03:37.188645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.626 [2024-12-06 18:03:37.188650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.626 [2024-12-06 18:03:37.188655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.626 [2024-12-06 18:03:37.188665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.626 qpair failed and we were unable to recover it. 
00:26:49.626 [2024-12-06 18:03:37.198714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.626 [2024-12-06 18:03:37.198753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.626 [2024-12-06 18:03:37.198763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.626 [2024-12-06 18:03:37.198768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.626 [2024-12-06 18:03:37.198773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.626 [2024-12-06 18:03:37.198783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.626 qpair failed and we were unable to recover it. 00:26:49.626 [2024-12-06 18:03:37.208718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.626 [2024-12-06 18:03:37.208757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.626 [2024-12-06 18:03:37.208768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.626 [2024-12-06 18:03:37.208773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.626 [2024-12-06 18:03:37.208778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.626 [2024-12-06 18:03:37.208789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.626 qpair failed and we were unable to recover it. 00:26:49.626 [2024-12-06 18:03:37.218723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.626 [2024-12-06 18:03:37.218769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.626 [2024-12-06 18:03:37.218779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.626 [2024-12-06 18:03:37.218784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.626 [2024-12-06 18:03:37.218789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.626 [2024-12-06 18:03:37.218799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.626 qpair failed and we were unable to recover it. 
00:26:49.626 [2024-12-06 18:03:37.228784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.626 [2024-12-06 18:03:37.228826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.626 [2024-12-06 18:03:37.228836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.626 [2024-12-06 18:03:37.228841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.626 [2024-12-06 18:03:37.228846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.626 [2024-12-06 18:03:37.228856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.626 qpair failed and we were unable to recover it. 00:26:49.626 [2024-12-06 18:03:37.238811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.626 [2024-12-06 18:03:37.238854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.626 [2024-12-06 18:03:37.238866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.626 [2024-12-06 18:03:37.238871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.626 [2024-12-06 18:03:37.238876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.626 [2024-12-06 18:03:37.238887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.626 qpair failed and we were unable to recover it. 00:26:49.626 [2024-12-06 18:03:37.248822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.626 [2024-12-06 18:03:37.248868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.626 [2024-12-06 18:03:37.248878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.626 [2024-12-06 18:03:37.248883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.626 [2024-12-06 18:03:37.248888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.626 [2024-12-06 18:03:37.248898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.626 qpair failed and we were unable to recover it. 
00:26:49.626 [2024-12-06 18:03:37.258730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.626 [2024-12-06 18:03:37.258772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.626 [2024-12-06 18:03:37.258781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.626 [2024-12-06 18:03:37.258786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.626 [2024-12-06 18:03:37.258791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.626 [2024-12-06 18:03:37.258801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.626 qpair failed and we were unable to recover it. 00:26:49.626 [2024-12-06 18:03:37.268902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.626 [2024-12-06 18:03:37.268994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.626 [2024-12-06 18:03:37.269003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.626 [2024-12-06 18:03:37.269009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.626 [2024-12-06 18:03:37.269014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.626 [2024-12-06 18:03:37.269024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.626 qpair failed and we were unable to recover it. 00:26:49.626 [2024-12-06 18:03:37.278915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.626 [2024-12-06 18:03:37.278959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.626 [2024-12-06 18:03:37.278968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.626 [2024-12-06 18:03:37.278977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.626 [2024-12-06 18:03:37.278982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.626 [2024-12-06 18:03:37.278992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.626 qpair failed and we were unable to recover it. 
00:26:49.626 [2024-12-06 18:03:37.288921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.626 [2024-12-06 18:03:37.288964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.626 [2024-12-06 18:03:37.288974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.626 [2024-12-06 18:03:37.288979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.626 [2024-12-06 18:03:37.288984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.626 [2024-12-06 18:03:37.288994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.626 qpair failed and we were unable to recover it. 00:26:49.626 [2024-12-06 18:03:37.298966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.626 [2024-12-06 18:03:37.299005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.626 [2024-12-06 18:03:37.299015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.626 [2024-12-06 18:03:37.299021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.626 [2024-12-06 18:03:37.299026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.627 [2024-12-06 18:03:37.299036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.627 qpair failed and we were unable to recover it. 00:26:49.627 [2024-12-06 18:03:37.308974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.627 [2024-12-06 18:03:37.309016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.627 [2024-12-06 18:03:37.309027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.627 [2024-12-06 18:03:37.309032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.627 [2024-12-06 18:03:37.309037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.627 [2024-12-06 18:03:37.309047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.627 qpair failed and we were unable to recover it. 
00:26:49.627 [2024-12-06 18:03:37.319026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.627 [2024-12-06 18:03:37.319069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.627 [2024-12-06 18:03:37.319079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.627 [2024-12-06 18:03:37.319085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.627 [2024-12-06 18:03:37.319089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.627 [2024-12-06 18:03:37.319113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.627 qpair failed and we were unable to recover it. 00:26:49.627 [2024-12-06 18:03:37.329042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.627 [2024-12-06 18:03:37.329082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.627 [2024-12-06 18:03:37.329092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.627 [2024-12-06 18:03:37.329097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.627 [2024-12-06 18:03:37.329105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.627 [2024-12-06 18:03:37.329115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.627 qpair failed and we were unable to recover it. 00:26:49.627 [2024-12-06 18:03:37.339070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.627 [2024-12-06 18:03:37.339123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.627 [2024-12-06 18:03:37.339133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.627 [2024-12-06 18:03:37.339138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.627 [2024-12-06 18:03:37.339143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.627 [2024-12-06 18:03:37.339154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.627 qpair failed and we were unable to recover it. 
00:26:49.627 [2024-12-06 18:03:37.349025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.627 [2024-12-06 18:03:37.349086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.627 [2024-12-06 18:03:37.349097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.627 [2024-12-06 18:03:37.349106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.627 [2024-12-06 18:03:37.349111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.627 [2024-12-06 18:03:37.349122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.627 qpair failed and we were unable to recover it. 00:26:49.627 [2024-12-06 18:03:37.359124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.627 [2024-12-06 18:03:37.359177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.627 [2024-12-06 18:03:37.359187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.627 [2024-12-06 18:03:37.359192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.627 [2024-12-06 18:03:37.359197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.627 [2024-12-06 18:03:37.359207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.627 qpair failed and we were unable to recover it. 00:26:49.627 [2024-12-06 18:03:37.369161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.627 [2024-12-06 18:03:37.369205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.627 [2024-12-06 18:03:37.369215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.627 [2024-12-06 18:03:37.369220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.627 [2024-12-06 18:03:37.369225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.627 [2024-12-06 18:03:37.369236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.627 qpair failed and we were unable to recover it. 
00:26:49.627 [2024-12-06 18:03:37.379183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.627 [2024-12-06 18:03:37.379227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.627 [2024-12-06 18:03:37.379237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.627 [2024-12-06 18:03:37.379242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.627 [2024-12-06 18:03:37.379247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.627 [2024-12-06 18:03:37.379257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.627 qpair failed and we were unable to recover it. 00:26:49.627 [2024-12-06 18:03:37.389243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.627 [2024-12-06 18:03:37.389315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.627 [2024-12-06 18:03:37.389325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.627 [2024-12-06 18:03:37.389330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.627 [2024-12-06 18:03:37.389335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.627 [2024-12-06 18:03:37.389345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.627 qpair failed and we were unable to recover it. 00:26:49.627 [2024-12-06 18:03:37.399222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.627 [2024-12-06 18:03:37.399271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.627 [2024-12-06 18:03:37.399281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.627 [2024-12-06 18:03:37.399286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.627 [2024-12-06 18:03:37.399290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.627 [2024-12-06 18:03:37.399301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.627 qpair failed and we were unable to recover it. 
00:26:49.627 [2024-12-06 18:03:37.409255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.627 [2024-12-06 18:03:37.409290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.627 [2024-12-06 18:03:37.409300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.627 [2024-12-06 18:03:37.409307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.627 [2024-12-06 18:03:37.409312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.627 [2024-12-06 18:03:37.409322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.627 qpair failed and we were unable to recover it. 00:26:49.627 [2024-12-06 18:03:37.419254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.627 [2024-12-06 18:03:37.419296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.627 [2024-12-06 18:03:37.419305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.627 [2024-12-06 18:03:37.419310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.627 [2024-12-06 18:03:37.419315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.627 [2024-12-06 18:03:37.419326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.627 qpair failed and we were unable to recover it. 00:26:49.627 [2024-12-06 18:03:37.429202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.627 [2024-12-06 18:03:37.429246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.627 [2024-12-06 18:03:37.429255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.627 [2024-12-06 18:03:37.429260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.627 [2024-12-06 18:03:37.429265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.627 [2024-12-06 18:03:37.429275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.628 qpair failed and we were unable to recover it. 
00:26:49.628 [2024-12-06 18:03:37.439343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.628 [2024-12-06 18:03:37.439388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.628 [2024-12-06 18:03:37.439397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.628 [2024-12-06 18:03:37.439402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.628 [2024-12-06 18:03:37.439407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.628 [2024-12-06 18:03:37.439417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.628 qpair failed and we were unable to recover it. 00:26:49.628 [2024-12-06 18:03:37.449337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.628 [2024-12-06 18:03:37.449376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.628 [2024-12-06 18:03:37.449385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.628 [2024-12-06 18:03:37.449391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.628 [2024-12-06 18:03:37.449395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.628 [2024-12-06 18:03:37.449408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.628 qpair failed and we were unable to recover it. 00:26:49.888 [2024-12-06 18:03:37.459403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.888 [2024-12-06 18:03:37.459440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.888 [2024-12-06 18:03:37.459449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.888 [2024-12-06 18:03:37.459455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.888 [2024-12-06 18:03:37.459459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.888 [2024-12-06 18:03:37.459469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.888 qpair failed and we were unable to recover it. 
00:26:49.888 [2024-12-06 18:03:37.469411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.888 [2024-12-06 18:03:37.469454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.888 [2024-12-06 18:03:37.469464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.888 [2024-12-06 18:03:37.469469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.888 [2024-12-06 18:03:37.469474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.888 [2024-12-06 18:03:37.469484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.888 qpair failed and we were unable to recover it. 00:26:49.889 [2024-12-06 18:03:37.479451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.889 [2024-12-06 18:03:37.479517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.889 [2024-12-06 18:03:37.479527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.889 [2024-12-06 18:03:37.479532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.889 [2024-12-06 18:03:37.479537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.889 [2024-12-06 18:03:37.479547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.889 qpair failed and we were unable to recover it. 00:26:49.889 [2024-12-06 18:03:37.489464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.889 [2024-12-06 18:03:37.489500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.889 [2024-12-06 18:03:37.489510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.889 [2024-12-06 18:03:37.489515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.889 [2024-12-06 18:03:37.489519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.889 [2024-12-06 18:03:37.489529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.889 qpair failed and we were unable to recover it. 
00:26:49.889 [2024-12-06 18:03:37.499500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.889 [2024-12-06 18:03:37.499573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.889 [2024-12-06 18:03:37.499583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.889 [2024-12-06 18:03:37.499588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.889 [2024-12-06 18:03:37.499592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.889 [2024-12-06 18:03:37.499603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.889 qpair failed and we were unable to recover it. 00:26:49.889 [2024-12-06 18:03:37.509533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.889 [2024-12-06 18:03:37.509599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.889 [2024-12-06 18:03:37.509609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.889 [2024-12-06 18:03:37.509614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.889 [2024-12-06 18:03:37.509619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.889 [2024-12-06 18:03:37.509629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.889 qpair failed and we were unable to recover it. 00:26:49.889 [2024-12-06 18:03:37.519528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.889 [2024-12-06 18:03:37.519621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.889 [2024-12-06 18:03:37.519631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.889 [2024-12-06 18:03:37.519636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.889 [2024-12-06 18:03:37.519641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.889 [2024-12-06 18:03:37.519651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.889 qpair failed and we were unable to recover it. 
00:26:49.889 [2024-12-06 18:03:37.529562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.889 [2024-12-06 18:03:37.529602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.889 [2024-12-06 18:03:37.529612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.889 [2024-12-06 18:03:37.529617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.889 [2024-12-06 18:03:37.529622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.889 [2024-12-06 18:03:37.529633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.889 qpair failed and we were unable to recover it. 00:26:49.889 [2024-12-06 18:03:37.539603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.889 [2024-12-06 18:03:37.539644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.889 [2024-12-06 18:03:37.539658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.889 [2024-12-06 18:03:37.539663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.889 [2024-12-06 18:03:37.539668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.889 [2024-12-06 18:03:37.539679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.889 qpair failed and we were unable to recover it. 00:26:49.889 [2024-12-06 18:03:37.549493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.889 [2024-12-06 18:03:37.549535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.889 [2024-12-06 18:03:37.549545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.889 [2024-12-06 18:03:37.549550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.889 [2024-12-06 18:03:37.549555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.889 [2024-12-06 18:03:37.549565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.889 qpair failed and we were unable to recover it. 
00:26:49.889 [2024-12-06 18:03:37.559675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.889 [2024-12-06 18:03:37.559719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.889 [2024-12-06 18:03:37.559728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.889 [2024-12-06 18:03:37.559734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.889 [2024-12-06 18:03:37.559738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.889 [2024-12-06 18:03:37.559749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.889 qpair failed and we were unable to recover it. 00:26:49.889 [2024-12-06 18:03:37.569566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.889 [2024-12-06 18:03:37.569627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.889 [2024-12-06 18:03:37.569637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.889 [2024-12-06 18:03:37.569643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.889 [2024-12-06 18:03:37.569648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.889 [2024-12-06 18:03:37.569658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.889 qpair failed and we were unable to recover it. 00:26:49.889 [2024-12-06 18:03:37.579688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.889 [2024-12-06 18:03:37.579730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.889 [2024-12-06 18:03:37.579740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.889 [2024-12-06 18:03:37.579745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.889 [2024-12-06 18:03:37.579752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:49.889 [2024-12-06 18:03:37.579763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:49.889 qpair failed and we were unable to recover it. 
00:26:49.889 [2024-12-06 18:03:37.589742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.889 [2024-12-06 18:03:37.589825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.889 [2024-12-06 18:03:37.589835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.889 [2024-12-06 18:03:37.589840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.889 [2024-12-06 18:03:37.589845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:49.889 [2024-12-06 18:03:37.589855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:49.889 qpair failed and we were unable to recover it.
00:26:49.889 [2024-12-06 18:03:37.599789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.889 [2024-12-06 18:03:37.599830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.889 [2024-12-06 18:03:37.599840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.889 [2024-12-06 18:03:37.599846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.889 [2024-12-06 18:03:37.599850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:49.889 [2024-12-06 18:03:37.599860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:49.889 qpair failed and we were unable to recover it.
00:26:49.889 [2024-12-06 18:03:37.609788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.890 [2024-12-06 18:03:37.609826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.890 [2024-12-06 18:03:37.609836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.890 [2024-12-06 18:03:37.609841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.890 [2024-12-06 18:03:37.609846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:49.890 [2024-12-06 18:03:37.609856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:49.890 qpair failed and we were unable to recover it.
00:26:49.890 [2024-12-06 18:03:37.619821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.890 [2024-12-06 18:03:37.619855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.890 [2024-12-06 18:03:37.619864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.890 [2024-12-06 18:03:37.619870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.890 [2024-12-06 18:03:37.619874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:49.890 [2024-12-06 18:03:37.619885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:49.890 qpair failed and we were unable to recover it.
00:26:49.890 [2024-12-06 18:03:37.629824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.890 [2024-12-06 18:03:37.629863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.890 [2024-12-06 18:03:37.629873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.890 [2024-12-06 18:03:37.629878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.890 [2024-12-06 18:03:37.629883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:49.890 [2024-12-06 18:03:37.629893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:49.890 qpair failed and we were unable to recover it.
00:26:49.890 [2024-12-06 18:03:37.639874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.890 [2024-12-06 18:03:37.639917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.890 [2024-12-06 18:03:37.639927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.890 [2024-12-06 18:03:37.639932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.890 [2024-12-06 18:03:37.639937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:49.890 [2024-12-06 18:03:37.639947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:49.890 qpair failed and we were unable to recover it.
00:26:49.890 [2024-12-06 18:03:37.649881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.890 [2024-12-06 18:03:37.649925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.890 [2024-12-06 18:03:37.649935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.890 [2024-12-06 18:03:37.649941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.890 [2024-12-06 18:03:37.649945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:49.890 [2024-12-06 18:03:37.649956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:49.890 qpair failed and we were unable to recover it.
00:26:49.890 [2024-12-06 18:03:37.659925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.890 [2024-12-06 18:03:37.659967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.890 [2024-12-06 18:03:37.659977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.890 [2024-12-06 18:03:37.659982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.890 [2024-12-06 18:03:37.659987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:49.890 [2024-12-06 18:03:37.659997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:49.890 qpair failed and we were unable to recover it.
00:26:49.890 [2024-12-06 18:03:37.669952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.890 [2024-12-06 18:03:37.669998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.890 [2024-12-06 18:03:37.670011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.890 [2024-12-06 18:03:37.670016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.890 [2024-12-06 18:03:37.670021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:49.890 [2024-12-06 18:03:37.670032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:49.890 qpair failed and we were unable to recover it.
00:26:49.890 [2024-12-06 18:03:37.679987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.890 [2024-12-06 18:03:37.680027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.890 [2024-12-06 18:03:37.680037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.890 [2024-12-06 18:03:37.680043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.890 [2024-12-06 18:03:37.680047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:49.890 [2024-12-06 18:03:37.680058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:49.890 qpair failed and we were unable to recover it.
00:26:49.890 [2024-12-06 18:03:37.690021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.890 [2024-12-06 18:03:37.690060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.890 [2024-12-06 18:03:37.690069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.890 [2024-12-06 18:03:37.690075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.890 [2024-12-06 18:03:37.690080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:49.890 [2024-12-06 18:03:37.690089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:49.890 qpair failed and we were unable to recover it.
00:26:49.890 [2024-12-06 18:03:37.699899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.890 [2024-12-06 18:03:37.699938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.890 [2024-12-06 18:03:37.699948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.890 [2024-12-06 18:03:37.699954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.890 [2024-12-06 18:03:37.699959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:49.890 [2024-12-06 18:03:37.699969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:49.890 qpair failed and we were unable to recover it.
00:26:49.890 [2024-12-06 18:03:37.710024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:49.890 [2024-12-06 18:03:37.710064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:49.890 [2024-12-06 18:03:37.710074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:49.890 [2024-12-06 18:03:37.710079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:49.890 [2024-12-06 18:03:37.710089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:49.890 [2024-12-06 18:03:37.710103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:49.890 qpair failed and we were unable to recover it.
00:26:50.150 [2024-12-06 18:03:37.720084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.150 [2024-12-06 18:03:37.720132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.150 [2024-12-06 18:03:37.720142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.150 [2024-12-06 18:03:37.720147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.150 [2024-12-06 18:03:37.720152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.150 [2024-12-06 18:03:37.720162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.150 qpair failed and we were unable to recover it.
00:26:50.150 [2024-12-06 18:03:37.730125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.150 [2024-12-06 18:03:37.730208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.150 [2024-12-06 18:03:37.730217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.150 [2024-12-06 18:03:37.730222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.150 [2024-12-06 18:03:37.730227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.150 [2024-12-06 18:03:37.730238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.150 qpair failed and we were unable to recover it.
00:26:50.150 [2024-12-06 18:03:37.740138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.150 [2024-12-06 18:03:37.740177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.150 [2024-12-06 18:03:37.740186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.150 [2024-12-06 18:03:37.740192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.150 [2024-12-06 18:03:37.740196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.150 [2024-12-06 18:03:37.740207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.150 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.750024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.750067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.750077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.750082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.750087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.750097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.760120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.760167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.760177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.760182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.760187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.760197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.770229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.770271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.770281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.770286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.770290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.770301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.780109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.780165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.780176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.780181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.780186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.780196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.790245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.790286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.790296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.790302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.790307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.790317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.800295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.800373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.800385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.800390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.800395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.800406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.810325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.810362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.810372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.810377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.810382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.810392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.820206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.820246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.820255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.820261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.820265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.820276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.830385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.830428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.830437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.830442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.830447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.830457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.840407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.840452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.840461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.840469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.840473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.840483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.850283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.850321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.850330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.850336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.850340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.850350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.860432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.860475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.860484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.860490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.860495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.860505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.870474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.870514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.870523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.870528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.870533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.870543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.880530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.880576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.880585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.880590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.880595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.880608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.890395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.890437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.890446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.890451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.890456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.890466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.900552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.900596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.900605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.900611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.900615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.900625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.910603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.910673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.910682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.910687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.910692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.910702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.920624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.920665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.920675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.920680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.920685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.920695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.930631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.930669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.930679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.930685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.930689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.930699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.940663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.940711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.940729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.940735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.940740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.940756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.950698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.950737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.950747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.950753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.950758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.950768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.960741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.960779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.960789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.960794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.151 [2024-12-06 18:03:37.960799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.151 [2024-12-06 18:03:37.960809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.151 qpair failed and we were unable to recover it.
00:26:50.151 [2024-12-06 18:03:37.970750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.151 [2024-12-06 18:03:37.970789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.151 [2024-12-06 18:03:37.970798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.151 [2024-12-06 18:03:37.970807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.152 [2024-12-06 18:03:37.970812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.152 [2024-12-06 18:03:37.970822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.152 qpair failed and we were unable to recover it.
00:26:50.411 [2024-12-06 18:03:37.980645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.412 [2024-12-06 18:03:37.980706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.412 [2024-12-06 18:03:37.980716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.412 [2024-12-06 18:03:37.980721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.412 [2024-12-06 18:03:37.980726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.412 [2024-12-06 18:03:37.980736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.412 qpair failed and we were unable to recover it.
00:26:50.412 [2024-12-06 18:03:37.990812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.412 [2024-12-06 18:03:37.990863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.412 [2024-12-06 18:03:37.990872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.412 [2024-12-06 18:03:37.990877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.412 [2024-12-06 18:03:37.990882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.412 [2024-12-06 18:03:37.990892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.412 qpair failed and we were unable to recover it.
00:26:50.412 [2024-12-06 18:03:38.000832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.412 [2024-12-06 18:03:38.000898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.412 [2024-12-06 18:03:38.000908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.412 [2024-12-06 18:03:38.000914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.412 [2024-12-06 18:03:38.000918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.412 [2024-12-06 18:03:38.000929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.412 qpair failed and we were unable to recover it.
00:26:50.412 [2024-12-06 18:03:38.010877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.412 [2024-12-06 18:03:38.010962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.412 [2024-12-06 18:03:38.010972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.412 [2024-12-06 18:03:38.010977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.412 [2024-12-06 18:03:38.010983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.412 [2024-12-06 18:03:38.010996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.412 qpair failed and we were unable to recover it.
00:26:50.412 [2024-12-06 18:03:38.020751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.412 [2024-12-06 18:03:38.020794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.412 [2024-12-06 18:03:38.020804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.412 [2024-12-06 18:03:38.020809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.412 [2024-12-06 18:03:38.020814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.412 [2024-12-06 18:03:38.020824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.412 qpair failed and we were unable to recover it.
00:26:50.412 [2024-12-06 18:03:38.030912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.412 [2024-12-06 18:03:38.030955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.412 [2024-12-06 18:03:38.030964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.412 [2024-12-06 18:03:38.030970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.412 [2024-12-06 18:03:38.030974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.412 [2024-12-06 18:03:38.030985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.412 qpair failed and we were unable to recover it.
00:26:50.412 [2024-12-06 18:03:38.040961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.412 [2024-12-06 18:03:38.041001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.412 [2024-12-06 18:03:38.041010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.412 [2024-12-06 18:03:38.041016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.412 [2024-12-06 18:03:38.041020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.412 [2024-12-06 18:03:38.041031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.412 qpair failed and we were unable to recover it.
00:26:50.412 [2024-12-06 18:03:38.050925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.412 [2024-12-06 18:03:38.050963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.412 [2024-12-06 18:03:38.050973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.412 [2024-12-06 18:03:38.050978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.412 [2024-12-06 18:03:38.050983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.412 [2024-12-06 18:03:38.050993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.412 qpair failed and we were unable to recover it.
00:26:50.412 [2024-12-06 18:03:38.060992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.412 [2024-12-06 18:03:38.061034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.412 [2024-12-06 18:03:38.061044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.412 [2024-12-06 18:03:38.061049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.412 [2024-12-06 18:03:38.061054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.412 [2024-12-06 18:03:38.061064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.412 qpair failed and we were unable to recover it.
00:26:50.412 [2024-12-06 18:03:38.071018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.412 [2024-12-06 18:03:38.071068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.412 [2024-12-06 18:03:38.071077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.412 [2024-12-06 18:03:38.071082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.412 [2024-12-06 18:03:38.071087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.412 [2024-12-06 18:03:38.071097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.412 qpair failed and we were unable to recover it.
00:26:50.412 [2024-12-06 18:03:38.081068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.412 [2024-12-06 18:03:38.081138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.412 [2024-12-06 18:03:38.081148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.412 [2024-12-06 18:03:38.081153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.412 [2024-12-06 18:03:38.081157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.412 [2024-12-06 18:03:38.081168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.412 qpair failed and we were unable to recover it.
00:26:50.412 [2024-12-06 18:03:38.091063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.412 [2024-12-06 18:03:38.091125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.412 [2024-12-06 18:03:38.091135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.412 [2024-12-06 18:03:38.091140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.412 [2024-12-06 18:03:38.091145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.412 [2024-12-06 18:03:38.091155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.412 qpair failed and we were unable to recover it.
00:26:50.412 [2024-12-06 18:03:38.101056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.413 [2024-12-06 18:03:38.101094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.413 [2024-12-06 18:03:38.101109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.413 [2024-12-06 18:03:38.101114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.413 [2024-12-06 18:03:38.101119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.413 [2024-12-06 18:03:38.101129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.413 qpair failed and we were unable to recover it.
00:26:50.413 [2024-12-06 18:03:38.111025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.413 [2024-12-06 18:03:38.111063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.413 [2024-12-06 18:03:38.111072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.413 [2024-12-06 18:03:38.111078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.413 [2024-12-06 18:03:38.111082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.413 [2024-12-06 18:03:38.111092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.413 qpair failed and we were unable to recover it.
00:26:50.413 [2024-12-06 18:03:38.121150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.413 [2024-12-06 18:03:38.121191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.413 [2024-12-06 18:03:38.121201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.413 [2024-12-06 18:03:38.121206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.413 [2024-12-06 18:03:38.121210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.413 [2024-12-06 18:03:38.121220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.413 qpair failed and we were unable to recover it.
00:26:50.413 [2024-12-06 18:03:38.131181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.413 [2024-12-06 18:03:38.131218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.413 [2024-12-06 18:03:38.131228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.413 [2024-12-06 18:03:38.131233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.413 [2024-12-06 18:03:38.131238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.413 [2024-12-06 18:03:38.131248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.413 qpair failed and we were unable to recover it.
00:26:50.413 [2024-12-06 18:03:38.141185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.413 [2024-12-06 18:03:38.141225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.413 [2024-12-06 18:03:38.141235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.413 [2024-12-06 18:03:38.141240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.413 [2024-12-06 18:03:38.141247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.413 [2024-12-06 18:03:38.141257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.413 qpair failed and we were unable to recover it.
00:26:50.413 [2024-12-06 18:03:38.151098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.413 [2024-12-06 18:03:38.151145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.413 [2024-12-06 18:03:38.151155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.413 [2024-12-06 18:03:38.151160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.413 [2024-12-06 18:03:38.151165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.413 [2024-12-06 18:03:38.151176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.413 qpair failed and we were unable to recover it.
00:26:50.413 [2024-12-06 18:03:38.161282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.413 [2024-12-06 18:03:38.161323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.413 [2024-12-06 18:03:38.161333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.413 [2024-12-06 18:03:38.161338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.413 [2024-12-06 18:03:38.161343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.413 [2024-12-06 18:03:38.161353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.413 qpair failed and we were unable to recover it.
00:26:50.413 [2024-12-06 18:03:38.171275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.413 [2024-12-06 18:03:38.171316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.413 [2024-12-06 18:03:38.171326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.413 [2024-12-06 18:03:38.171331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.413 [2024-12-06 18:03:38.171336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.413 [2024-12-06 18:03:38.171346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.413 qpair failed and we were unable to recover it.
00:26:50.413 [2024-12-06 18:03:38.181325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.413 [2024-12-06 18:03:38.181365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.413 [2024-12-06 18:03:38.181374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.413 [2024-12-06 18:03:38.181379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.413 [2024-12-06 18:03:38.181384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.413 [2024-12-06 18:03:38.181394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.413 qpair failed and we were unable to recover it.
00:26:50.413 [2024-12-06 18:03:38.191202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.413 [2024-12-06 18:03:38.191250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.413 [2024-12-06 18:03:38.191260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.413 [2024-12-06 18:03:38.191265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.413 [2024-12-06 18:03:38.191270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.413 [2024-12-06 18:03:38.191280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.413 qpair failed and we were unable to recover it.
00:26:50.413 [2024-12-06 18:03:38.201428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.413 [2024-12-06 18:03:38.201499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.413 [2024-12-06 18:03:38.201509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.413 [2024-12-06 18:03:38.201514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.413 [2024-12-06 18:03:38.201519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.413 [2024-12-06 18:03:38.201529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.413 qpair failed and we were unable to recover it.
00:26:50.413 [2024-12-06 18:03:38.211378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.413 [2024-12-06 18:03:38.211417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.413 [2024-12-06 18:03:38.211427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.413 [2024-12-06 18:03:38.211432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.413 [2024-12-06 18:03:38.211437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.413 [2024-12-06 18:03:38.211447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.413 qpair failed and we were unable to recover it.
00:26:50.413 [2024-12-06 18:03:38.221388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.413 [2024-12-06 18:03:38.221427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.414 [2024-12-06 18:03:38.221437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.414 [2024-12-06 18:03:38.221442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.414 [2024-12-06 18:03:38.221447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.414 [2024-12-06 18:03:38.221457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.414 qpair failed and we were unable to recover it.
00:26:50.414 [2024-12-06 18:03:38.231424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.414 [2024-12-06 18:03:38.231511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.414 [2024-12-06 18:03:38.231523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.414 [2024-12-06 18:03:38.231529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.414 [2024-12-06 18:03:38.231533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.414 [2024-12-06 18:03:38.231543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.414 qpair failed and we were unable to recover it.
00:26:50.674 [2024-12-06 18:03:38.241463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.674 [2024-12-06 18:03:38.241504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.674 [2024-12-06 18:03:38.241514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.674 [2024-12-06 18:03:38.241519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.674 [2024-12-06 18:03:38.241523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.674 [2024-12-06 18:03:38.241534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.674 qpair failed and we were unable to recover it.
00:26:50.674 [2024-12-06 18:03:38.251471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.674 [2024-12-06 18:03:38.251508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.674 [2024-12-06 18:03:38.251518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.674 [2024-12-06 18:03:38.251523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.674 [2024-12-06 18:03:38.251528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.674 [2024-12-06 18:03:38.251538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.674 qpair failed and we were unable to recover it.
00:26:50.674 [2024-12-06 18:03:38.261528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.674 [2024-12-06 18:03:38.261567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.674 [2024-12-06 18:03:38.261576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.675 [2024-12-06 18:03:38.261582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.675 [2024-12-06 18:03:38.261586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.675 [2024-12-06 18:03:38.261597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.675 qpair failed and we were unable to recover it.
00:26:50.675 [2024-12-06 18:03:38.271550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.675 [2024-12-06 18:03:38.271591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.675 [2024-12-06 18:03:38.271600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.675 [2024-12-06 18:03:38.271605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.675 [2024-12-06 18:03:38.271613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.675 [2024-12-06 18:03:38.271623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.675 qpair failed and we were unable to recover it.
00:26:50.675 [2024-12-06 18:03:38.281591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.675 [2024-12-06 18:03:38.281636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.675 [2024-12-06 18:03:38.281645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.675 [2024-12-06 18:03:38.281650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.675 [2024-12-06 18:03:38.281655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.675 [2024-12-06 18:03:38.281665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.675 qpair failed and we were unable to recover it.
00:26:50.675 [2024-12-06 18:03:38.291622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.675 [2024-12-06 18:03:38.291661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.675 [2024-12-06 18:03:38.291671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.675 [2024-12-06 18:03:38.291676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.675 [2024-12-06 18:03:38.291680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.675 [2024-12-06 18:03:38.291691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.675 qpair failed and we were unable to recover it.
00:26:50.675 [2024-12-06 18:03:38.301513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.675 [2024-12-06 18:03:38.301552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.675 [2024-12-06 18:03:38.301562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.675 [2024-12-06 18:03:38.301567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.675 [2024-12-06 18:03:38.301571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.675 [2024-12-06 18:03:38.301581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.675 qpair failed and we were unable to recover it.
00:26:50.675 [2024-12-06 18:03:38.311640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.675 [2024-12-06 18:03:38.311680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.675 [2024-12-06 18:03:38.311689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.675 [2024-12-06 18:03:38.311694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.675 [2024-12-06 18:03:38.311699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.675 [2024-12-06 18:03:38.311709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.675 qpair failed and we were unable to recover it.
00:26:50.675 [2024-12-06 18:03:38.321715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.675 [2024-12-06 18:03:38.321756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.675 [2024-12-06 18:03:38.321766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.675 [2024-12-06 18:03:38.321771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.675 [2024-12-06 18:03:38.321776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.675 [2024-12-06 18:03:38.321786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.675 qpair failed and we were unable to recover it.
00:26:50.675 [2024-12-06 18:03:38.331584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.675 [2024-12-06 18:03:38.331624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.675 [2024-12-06 18:03:38.331634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.675 [2024-12-06 18:03:38.331639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.675 [2024-12-06 18:03:38.331643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.675 [2024-12-06 18:03:38.331654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.675 qpair failed and we were unable to recover it.
00:26:50.675 [2024-12-06 18:03:38.341730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.675 [2024-12-06 18:03:38.341768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.675 [2024-12-06 18:03:38.341777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.675 [2024-12-06 18:03:38.341783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.675 [2024-12-06 18:03:38.341787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.675 [2024-12-06 18:03:38.341797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.675 qpair failed and we were unable to recover it.
00:26:50.675 [2024-12-06 18:03:38.351742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.675 [2024-12-06 18:03:38.351782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.675 [2024-12-06 18:03:38.351791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.675 [2024-12-06 18:03:38.351796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.675 [2024-12-06 18:03:38.351801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.675 [2024-12-06 18:03:38.351811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.675 qpair failed and we were unable to recover it.
00:26:50.675 [2024-12-06 18:03:38.361818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.675 [2024-12-06 18:03:38.361861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.675 [2024-12-06 18:03:38.361871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.675 [2024-12-06 18:03:38.361876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.676 [2024-12-06 18:03:38.361881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.676 [2024-12-06 18:03:38.361891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.676 qpair failed and we were unable to recover it.
00:26:50.676 [2024-12-06 18:03:38.371815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.676 [2024-12-06 18:03:38.371852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.676 [2024-12-06 18:03:38.371861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.676 [2024-12-06 18:03:38.371867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.676 [2024-12-06 18:03:38.371871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.676 [2024-12-06 18:03:38.371881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.676 qpair failed and we were unable to recover it.
00:26:50.676 [2024-12-06 18:03:38.381843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.676 [2024-12-06 18:03:38.381883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.676 [2024-12-06 18:03:38.381892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.676 [2024-12-06 18:03:38.381898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.676 [2024-12-06 18:03:38.381902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.676 [2024-12-06 18:03:38.381913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.676 qpair failed and we were unable to recover it.
00:26:50.676 [2024-12-06 18:03:38.391884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.676 [2024-12-06 18:03:38.391924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.676 [2024-12-06 18:03:38.391934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.676 [2024-12-06 18:03:38.391939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.676 [2024-12-06 18:03:38.391944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.676 [2024-12-06 18:03:38.391955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.676 qpair failed and we were unable to recover it.
00:26:50.676 [2024-12-06 18:03:38.401784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.676 [2024-12-06 18:03:38.401826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.676 [2024-12-06 18:03:38.401836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.676 [2024-12-06 18:03:38.401844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.676 [2024-12-06 18:03:38.401848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.676 [2024-12-06 18:03:38.401859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.676 qpair failed and we were unable to recover it.
00:26:50.676 [2024-12-06 18:03:38.411956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.676 [2024-12-06 18:03:38.411996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.676 [2024-12-06 18:03:38.412006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.676 [2024-12-06 18:03:38.412011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.676 [2024-12-06 18:03:38.412016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.676 [2024-12-06 18:03:38.412026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.676 qpair failed and we were unable to recover it.
00:26:50.676 [2024-12-06 18:03:38.421948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.676 [2024-12-06 18:03:38.421985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.676 [2024-12-06 18:03:38.421995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.676 [2024-12-06 18:03:38.422000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.676 [2024-12-06 18:03:38.422004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.676 [2024-12-06 18:03:38.422015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.676 qpair failed and we were unable to recover it.
00:26:50.676 [2024-12-06 18:03:38.432005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.676 [2024-12-06 18:03:38.432048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.676 [2024-12-06 18:03:38.432058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.676 [2024-12-06 18:03:38.432063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.676 [2024-12-06 18:03:38.432067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.676 [2024-12-06 18:03:38.432077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.676 qpair failed and we were unable to recover it.
00:26:50.676 [2024-12-06 18:03:38.442045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.676 [2024-12-06 18:03:38.442089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.676 [2024-12-06 18:03:38.442102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.676 [2024-12-06 18:03:38.442108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.676 [2024-12-06 18:03:38.442112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.676 [2024-12-06 18:03:38.442128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.676 qpair failed and we were unable to recover it.
00:26:50.676 [2024-12-06 18:03:38.452040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.676 [2024-12-06 18:03:38.452088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.676 [2024-12-06 18:03:38.452098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.676 [2024-12-06 18:03:38.452107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.676 [2024-12-06 18:03:38.452112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.676 [2024-12-06 18:03:38.452122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.676 qpair failed and we were unable to recover it.
00:26:50.676 [2024-12-06 18:03:38.462091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.676 [2024-12-06 18:03:38.462135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.676 [2024-12-06 18:03:38.462146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.676 [2024-12-06 18:03:38.462151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.676 [2024-12-06 18:03:38.462156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.677 [2024-12-06 18:03:38.462167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.677 qpair failed and we were unable to recover it.
00:26:50.677 [2024-12-06 18:03:38.472114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.677 [2024-12-06 18:03:38.472191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.677 [2024-12-06 18:03:38.472200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.677 [2024-12-06 18:03:38.472206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.677 [2024-12-06 18:03:38.472210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.677 [2024-12-06 18:03:38.472221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.677 qpair failed and we were unable to recover it.
00:26:50.677 [2024-12-06 18:03:38.482127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.677 [2024-12-06 18:03:38.482174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.677 [2024-12-06 18:03:38.482183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.677 [2024-12-06 18:03:38.482188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.677 [2024-12-06 18:03:38.482193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.677 [2024-12-06 18:03:38.482204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.677 qpair failed and we were unable to recover it.
00:26:50.677 [2024-12-06 18:03:38.492130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.677 [2024-12-06 18:03:38.492174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.677 [2024-12-06 18:03:38.492183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.677 [2024-12-06 18:03:38.492189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.677 [2024-12-06 18:03:38.492193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.677 [2024-12-06 18:03:38.492204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.677 qpair failed and we were unable to recover it.
00:26:50.936 [2024-12-06 18:03:38.502190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.936 [2024-12-06 18:03:38.502231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.936 [2024-12-06 18:03:38.502241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.936 [2024-12-06 18:03:38.502246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.936 [2024-12-06 18:03:38.502251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.936 [2024-12-06 18:03:38.502261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.936 qpair failed and we were unable to recover it.
00:26:50.936 [2024-12-06 18:03:38.512246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.936 [2024-12-06 18:03:38.512315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.936 [2024-12-06 18:03:38.512324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.936 [2024-12-06 18:03:38.512330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.936 [2024-12-06 18:03:38.512334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.936 [2024-12-06 18:03:38.512345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.936 qpair failed and we were unable to recover it.
00:26:50.936 [2024-12-06 18:03:38.522246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.936 [2024-12-06 18:03:38.522294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.936 [2024-12-06 18:03:38.522304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.936 [2024-12-06 18:03:38.522309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.936 [2024-12-06 18:03:38.522314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.936 [2024-12-06 18:03:38.522324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.936 qpair failed and we were unable to recover it.
00:26:50.936 [2024-12-06 18:03:38.532128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.936 [2024-12-06 18:03:38.532166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.936 [2024-12-06 18:03:38.532176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.936 [2024-12-06 18:03:38.532185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.936 [2024-12-06 18:03:38.532189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.936 [2024-12-06 18:03:38.532200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.936 qpair failed and we were unable to recover it.
00:26:50.936 [2024-12-06 18:03:38.542160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.937 [2024-12-06 18:03:38.542204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.937 [2024-12-06 18:03:38.542214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.937 [2024-12-06 18:03:38.542220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.937 [2024-12-06 18:03:38.542224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.937 [2024-12-06 18:03:38.542235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.937 qpair failed and we were unable to recover it.
00:26:50.937 [2024-12-06 18:03:38.552298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.937 [2024-12-06 18:03:38.552338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.937 [2024-12-06 18:03:38.552347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.937 [2024-12-06 18:03:38.552352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.937 [2024-12-06 18:03:38.552357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.937 [2024-12-06 18:03:38.552367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.937 qpair failed and we were unable to recover it.
00:26:50.937 [2024-12-06 18:03:38.562331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.937 [2024-12-06 18:03:38.562374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.937 [2024-12-06 18:03:38.562383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.937 [2024-12-06 18:03:38.562389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.937 [2024-12-06 18:03:38.562393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.937 [2024-12-06 18:03:38.562404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.937 qpair failed and we were unable to recover it.
00:26:50.937 [2024-12-06 18:03:38.572384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.937 [2024-12-06 18:03:38.572423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.937 [2024-12-06 18:03:38.572433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.937 [2024-12-06 18:03:38.572438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.937 [2024-12-06 18:03:38.572443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.937 [2024-12-06 18:03:38.572456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.937 qpair failed and we were unable to recover it.
00:26:50.937 [2024-12-06 18:03:38.582420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.937 [2024-12-06 18:03:38.582462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.937 [2024-12-06 18:03:38.582472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.937 [2024-12-06 18:03:38.582477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.937 [2024-12-06 18:03:38.582482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.937 [2024-12-06 18:03:38.582492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.937 qpair failed and we were unable to recover it.
00:26:50.937 [2024-12-06 18:03:38.592444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.937 [2024-12-06 18:03:38.592482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.937 [2024-12-06 18:03:38.592491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.937 [2024-12-06 18:03:38.592497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.937 [2024-12-06 18:03:38.592501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.937 [2024-12-06 18:03:38.592511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.937 qpair failed and we were unable to recover it.
00:26:50.937 [2024-12-06 18:03:38.602473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.937 [2024-12-06 18:03:38.602515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.937 [2024-12-06 18:03:38.602524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.937 [2024-12-06 18:03:38.602529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.937 [2024-12-06 18:03:38.602534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.937 [2024-12-06 18:03:38.602544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.937 qpair failed and we were unable to recover it.
00:26:50.937 [2024-12-06 18:03:38.612458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.937 [2024-12-06 18:03:38.612499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.937 [2024-12-06 18:03:38.612508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.937 [2024-12-06 18:03:38.612513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.937 [2024-12-06 18:03:38.612518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.937 [2024-12-06 18:03:38.612529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.937 qpair failed and we were unable to recover it.
00:26:50.937 [2024-12-06 18:03:38.622514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.937 [2024-12-06 18:03:38.622560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.937 [2024-12-06 18:03:38.622569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.937 [2024-12-06 18:03:38.622575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.937 [2024-12-06 18:03:38.622579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.937 [2024-12-06 18:03:38.622590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.937 qpair failed and we were unable to recover it.
00:26:50.937 [2024-12-06 18:03:38.632531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.937 [2024-12-06 18:03:38.632580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.937 [2024-12-06 18:03:38.632589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.937 [2024-12-06 18:03:38.632594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.937 [2024-12-06 18:03:38.632599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.937 [2024-12-06 18:03:38.632609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.937 qpair failed and we were unable to recover it.
00:26:50.937 [2024-12-06 18:03:38.642577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.937 [2024-12-06 18:03:38.642615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.937 [2024-12-06 18:03:38.642625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.937 [2024-12-06 18:03:38.642630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.937 [2024-12-06 18:03:38.642635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.937 [2024-12-06 18:03:38.642645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.937 qpair failed and we were unable to recover it.
00:26:50.937 [2024-12-06 18:03:38.652599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.937 [2024-12-06 18:03:38.652639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.937 [2024-12-06 18:03:38.652648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.937 [2024-12-06 18:03:38.652653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.937 [2024-12-06 18:03:38.652658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.937 [2024-12-06 18:03:38.652668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.937 qpair failed and we were unable to recover it.
00:26:50.937 [2024-12-06 18:03:38.662621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.937 [2024-12-06 18:03:38.662658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.937 [2024-12-06 18:03:38.662670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.937 [2024-12-06 18:03:38.662675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.937 [2024-12-06 18:03:38.662680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.937 [2024-12-06 18:03:38.662690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.937 qpair failed and we were unable to recover it.
00:26:50.937 [2024-12-06 18:03:38.672649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.937 [2024-12-06 18:03:38.672689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.937 [2024-12-06 18:03:38.672698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.937 [2024-12-06 18:03:38.672704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.937 [2024-12-06 18:03:38.672709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.937 [2024-12-06 18:03:38.672719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.937 qpair failed and we were unable to recover it.
00:26:50.937 [2024-12-06 18:03:38.682674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.937 [2024-12-06 18:03:38.682718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.937 [2024-12-06 18:03:38.682727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.937 [2024-12-06 18:03:38.682732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.937 [2024-12-06 18:03:38.682737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.937 [2024-12-06 18:03:38.682747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.937 qpair failed and we were unable to recover it.
00:26:50.937 [2024-12-06 18:03:38.692695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.937 [2024-12-06 18:03:38.692740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.937 [2024-12-06 18:03:38.692750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.937 [2024-12-06 18:03:38.692755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.937 [2024-12-06 18:03:38.692761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.937 [2024-12-06 18:03:38.692771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.937 qpair failed and we were unable to recover it.
00:26:50.937 [2024-12-06 18:03:38.702710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.937 [2024-12-06 18:03:38.702750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.937 [2024-12-06 18:03:38.702760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.937 [2024-12-06 18:03:38.702765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.937 [2024-12-06 18:03:38.702773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.937 [2024-12-06 18:03:38.702783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.937 qpair failed and we were unable to recover it.
00:26:50.937 [2024-12-06 18:03:38.712609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.937 [2024-12-06 18:03:38.712655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.937 [2024-12-06 18:03:38.712664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.937 [2024-12-06 18:03:38.712669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.937 [2024-12-06 18:03:38.712674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.937 [2024-12-06 18:03:38.712685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.937 qpair failed and we were unable to recover it.
00:26:50.937 [2024-12-06 18:03:38.722776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:50.937 [2024-12-06 18:03:38.722828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:50.937 [2024-12-06 18:03:38.722837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:50.937 [2024-12-06 18:03:38.722843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:50.937 [2024-12-06 18:03:38.722847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90
00:26:50.937 [2024-12-06 18:03:38.722858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:50.937 qpair failed and we were unable to recover it.
00:26:50.937 [2024-12-06 18:03:38.732801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.937 [2024-12-06 18:03:38.732844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.937 [2024-12-06 18:03:38.732862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.937 [2024-12-06 18:03:38.732869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.937 [2024-12-06 18:03:38.732875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:50.937 [2024-12-06 18:03:38.732889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.937 qpair failed and we were unable to recover it. 00:26:50.937 [2024-12-06 18:03:38.742838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.937 [2024-12-06 18:03:38.742892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.937 [2024-12-06 18:03:38.742911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.937 [2024-12-06 18:03:38.742917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.937 [2024-12-06 18:03:38.742923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:50.938 [2024-12-06 18:03:38.742937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.938 qpair failed and we were unable to recover it. 00:26:50.938 [2024-12-06 18:03:38.752850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.938 [2024-12-06 18:03:38.752904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.938 [2024-12-06 18:03:38.752923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.938 [2024-12-06 18:03:38.752929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.938 [2024-12-06 18:03:38.752934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:50.938 [2024-12-06 18:03:38.752948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:50.938 qpair failed and we were unable to recover it. 
00:26:51.197 [2024-12-06 18:03:38.762909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.197 [2024-12-06 18:03:38.762956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.197 [2024-12-06 18:03:38.762975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.197 [2024-12-06 18:03:38.762981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.197 [2024-12-06 18:03:38.762987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.197 [2024-12-06 18:03:38.763001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.197 qpair failed and we were unable to recover it. 00:26:51.197 [2024-12-06 18:03:38.772900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.197 [2024-12-06 18:03:38.772964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.197 [2024-12-06 18:03:38.772976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.197 [2024-12-06 18:03:38.772981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.197 [2024-12-06 18:03:38.772986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.197 [2024-12-06 18:03:38.772997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.197 qpair failed and we were unable to recover it. 00:26:51.197 [2024-12-06 18:03:38.782923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.197 [2024-12-06 18:03:38.782971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.197 [2024-12-06 18:03:38.782981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.197 [2024-12-06 18:03:38.782987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.197 [2024-12-06 18:03:38.782992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.197 [2024-12-06 18:03:38.783002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.197 qpair failed and we were unable to recover it. 
00:26:51.197 [2024-12-06 18:03:38.792964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.197 [2024-12-06 18:03:38.793006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.197 [2024-12-06 18:03:38.793020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.197 [2024-12-06 18:03:38.793025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.197 [2024-12-06 18:03:38.793030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.197 [2024-12-06 18:03:38.793041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.197 qpair failed and we were unable to recover it. 00:26:51.197 [2024-12-06 18:03:38.803002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.197 [2024-12-06 18:03:38.803044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.197 [2024-12-06 18:03:38.803054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.197 [2024-12-06 18:03:38.803059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.197 [2024-12-06 18:03:38.803064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.197 [2024-12-06 18:03:38.803075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.197 qpair failed and we were unable to recover it. 00:26:51.197 [2024-12-06 18:03:38.813018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.197 [2024-12-06 18:03:38.813060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.197 [2024-12-06 18:03:38.813070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.197 [2024-12-06 18:03:38.813075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.197 [2024-12-06 18:03:38.813080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.197 [2024-12-06 18:03:38.813090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.197 qpair failed and we were unable to recover it. 
00:26:51.197 [2024-12-06 18:03:38.822906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.197 [2024-12-06 18:03:38.822945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.197 [2024-12-06 18:03:38.822956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.197 [2024-12-06 18:03:38.822962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.197 [2024-12-06 18:03:38.822967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.197 [2024-12-06 18:03:38.822978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.197 qpair failed and we were unable to recover it. 00:26:51.197 [2024-12-06 18:03:38.833070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.197 [2024-12-06 18:03:38.833114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.197 [2024-12-06 18:03:38.833125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.197 [2024-12-06 18:03:38.833130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.197 [2024-12-06 18:03:38.833138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.197 [2024-12-06 18:03:38.833149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.197 qpair failed and we were unable to recover it. 00:26:51.197 [2024-12-06 18:03:38.842971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.197 [2024-12-06 18:03:38.843016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.197 [2024-12-06 18:03:38.843026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.197 [2024-12-06 18:03:38.843031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.197 [2024-12-06 18:03:38.843036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.197 [2024-12-06 18:03:38.843046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.197 qpair failed and we were unable to recover it. 
00:26:51.197 [2024-12-06 18:03:38.853097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.197 [2024-12-06 18:03:38.853137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.197 [2024-12-06 18:03:38.853147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.197 [2024-12-06 18:03:38.853153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.197 [2024-12-06 18:03:38.853157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.197 [2024-12-06 18:03:38.853168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.197 qpair failed and we were unable to recover it. 00:26:51.197 [2024-12-06 18:03:38.863154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.197 [2024-12-06 18:03:38.863195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.197 [2024-12-06 18:03:38.863205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.197 [2024-12-06 18:03:38.863210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.197 [2024-12-06 18:03:38.863215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.197 [2024-12-06 18:03:38.863225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.197 qpair failed and we were unable to recover it. 00:26:51.197 [2024-12-06 18:03:38.873177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.197 [2024-12-06 18:03:38.873218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.197 [2024-12-06 18:03:38.873228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.197 [2024-12-06 18:03:38.873233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.197 [2024-12-06 18:03:38.873237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.197 [2024-12-06 18:03:38.873248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.198 qpair failed and we were unable to recover it. 
00:26:51.198 [2024-12-06 18:03:38.883195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.198 [2024-12-06 18:03:38.883240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.198 [2024-12-06 18:03:38.883249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.198 [2024-12-06 18:03:38.883255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.198 [2024-12-06 18:03:38.883259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.198 [2024-12-06 18:03:38.883270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.198 qpair failed and we were unable to recover it. 00:26:51.198 [2024-12-06 18:03:38.893237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.198 [2024-12-06 18:03:38.893285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.198 [2024-12-06 18:03:38.893295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.198 [2024-12-06 18:03:38.893300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.198 [2024-12-06 18:03:38.893305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.198 [2024-12-06 18:03:38.893315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.198 qpair failed and we were unable to recover it. 00:26:51.198 [2024-12-06 18:03:38.903243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.198 [2024-12-06 18:03:38.903284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.198 [2024-12-06 18:03:38.903294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.198 [2024-12-06 18:03:38.903299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.198 [2024-12-06 18:03:38.903304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.198 [2024-12-06 18:03:38.903314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.198 qpair failed and we were unable to recover it. 
00:26:51.198 [2024-12-06 18:03:38.913198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.198 [2024-12-06 18:03:38.913239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.198 [2024-12-06 18:03:38.913249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.198 [2024-12-06 18:03:38.913254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.198 [2024-12-06 18:03:38.913259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.198 [2024-12-06 18:03:38.913269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.198 qpair failed and we were unable to recover it. 00:26:51.198 [2024-12-06 18:03:38.923342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.198 [2024-12-06 18:03:38.923390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.198 [2024-12-06 18:03:38.923400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.198 [2024-12-06 18:03:38.923405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.198 [2024-12-06 18:03:38.923410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.198 [2024-12-06 18:03:38.923420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.198 qpair failed and we were unable to recover it. 00:26:51.198 [2024-12-06 18:03:38.933210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.198 [2024-12-06 18:03:38.933248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.198 [2024-12-06 18:03:38.933258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.198 [2024-12-06 18:03:38.933263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.198 [2024-12-06 18:03:38.933268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.198 [2024-12-06 18:03:38.933279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.198 qpair failed and we were unable to recover it. 
00:26:51.198 [2024-12-06 18:03:38.943252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.198 [2024-12-06 18:03:38.943293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.198 [2024-12-06 18:03:38.943303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.198 [2024-12-06 18:03:38.943308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.198 [2024-12-06 18:03:38.943313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.198 [2024-12-06 18:03:38.943323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.198 qpair failed and we were unable to recover it. 00:26:51.198 [2024-12-06 18:03:38.953276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.198 [2024-12-06 18:03:38.953316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.198 [2024-12-06 18:03:38.953327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.198 [2024-12-06 18:03:38.953332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.198 [2024-12-06 18:03:38.953337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.198 [2024-12-06 18:03:38.953348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.198 qpair failed and we were unable to recover it. 00:26:51.198 [2024-12-06 18:03:38.963443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.198 [2024-12-06 18:03:38.963483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.198 [2024-12-06 18:03:38.963493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.198 [2024-12-06 18:03:38.963501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.198 [2024-12-06 18:03:38.963505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.198 [2024-12-06 18:03:38.963516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.198 qpair failed and we were unable to recover it. 
00:26:51.198 [2024-12-06 18:03:38.973311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.198 [2024-12-06 18:03:38.973353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.198 [2024-12-06 18:03:38.973363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.198 [2024-12-06 18:03:38.973369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.198 [2024-12-06 18:03:38.973373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.198 [2024-12-06 18:03:38.973383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.198 qpair failed and we were unable to recover it. 00:26:51.198 [2024-12-06 18:03:38.983491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.198 [2024-12-06 18:03:38.983529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.198 [2024-12-06 18:03:38.983539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.198 [2024-12-06 18:03:38.983544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.198 [2024-12-06 18:03:38.983549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.198 [2024-12-06 18:03:38.983560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.198 qpair failed and we were unable to recover it. 00:26:51.198 [2024-12-06 18:03:38.993405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.198 [2024-12-06 18:03:38.993460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.198 [2024-12-06 18:03:38.993470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.198 [2024-12-06 18:03:38.993475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.198 [2024-12-06 18:03:38.993480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.198 [2024-12-06 18:03:38.993491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.198 qpair failed and we were unable to recover it. 
00:26:51.198 [2024-12-06 18:03:39.003523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.198 [2024-12-06 18:03:39.003564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.198 [2024-12-06 18:03:39.003574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.198 [2024-12-06 18:03:39.003579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.198 [2024-12-06 18:03:39.003584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.198 [2024-12-06 18:03:39.003597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.198 qpair failed and we were unable to recover it. 00:26:51.198 [2024-12-06 18:03:39.013563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.198 [2024-12-06 18:03:39.013613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.198 [2024-12-06 18:03:39.013623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.198 [2024-12-06 18:03:39.013628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.198 [2024-12-06 18:03:39.013633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.198 [2024-12-06 18:03:39.013643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.198 qpair failed and we were unable to recover it. 00:26:51.459 [2024-12-06 18:03:39.023601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.459 [2024-12-06 18:03:39.023641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.459 [2024-12-06 18:03:39.023651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.459 [2024-12-06 18:03:39.023656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.459 [2024-12-06 18:03:39.023661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.459 [2024-12-06 18:03:39.023672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.459 qpair failed and we were unable to recover it. 
00:26:51.459 [2024-12-06 18:03:39.033627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.459 [2024-12-06 18:03:39.033678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.459 [2024-12-06 18:03:39.033688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.459 [2024-12-06 18:03:39.033693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.459 [2024-12-06 18:03:39.033698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.459 [2024-12-06 18:03:39.033709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.459 qpair failed and we were unable to recover it. 00:26:51.459 [2024-12-06 18:03:39.043669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.459 [2024-12-06 18:03:39.043719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.459 [2024-12-06 18:03:39.043729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.459 [2024-12-06 18:03:39.043734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.459 [2024-12-06 18:03:39.043739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.459 [2024-12-06 18:03:39.043749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.459 qpair failed and we were unable to recover it. 00:26:51.459 [2024-12-06 18:03:39.053677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.459 [2024-12-06 18:03:39.053725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.459 [2024-12-06 18:03:39.053734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.459 [2024-12-06 18:03:39.053740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.459 [2024-12-06 18:03:39.053744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.459 [2024-12-06 18:03:39.053755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.459 qpair failed and we were unable to recover it. 
00:26:51.459 [2024-12-06 18:03:39.063713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.459 [2024-12-06 18:03:39.063754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.459 [2024-12-06 18:03:39.063764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.459 [2024-12-06 18:03:39.063769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.063774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.063784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 00:26:51.460 [2024-12-06 18:03:39.073745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.073784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.073794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.073799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.073804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.073814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 00:26:51.460 [2024-12-06 18:03:39.083764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.083804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.083814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.083819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.083823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.083834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 
00:26:51.460 [2024-12-06 18:03:39.093776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.093815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.093828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.093833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.093838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.093848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 00:26:51.460 [2024-12-06 18:03:39.103668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.103707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.103716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.103722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.103726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.103737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 00:26:51.460 [2024-12-06 18:03:39.113842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.113883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.113893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.113898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.113903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.113913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 
00:26:51.460 [2024-12-06 18:03:39.123868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.123915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.123933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.123940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.123945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.123959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 00:26:51.460 [2024-12-06 18:03:39.133904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.133949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.133967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.133974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.133979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.133996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 00:26:51.460 [2024-12-06 18:03:39.143809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.143849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.143861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.143867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.143871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.143883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 
00:26:51.460 [2024-12-06 18:03:39.153816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.153854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.153864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.153870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.153875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.153885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 00:26:51.460 [2024-12-06 18:03:39.163990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.164035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.164044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.164049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.164054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.164064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 00:26:51.460 [2024-12-06 18:03:39.174003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.174040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.174050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.174055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.174060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.174070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 
00:26:51.460 [2024-12-06 18:03:39.184026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.184069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.184079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.184084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.184089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.184102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 00:26:51.460 [2024-12-06 18:03:39.193928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.193970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.193980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.193985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.193990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.194001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 00:26:51.460 [2024-12-06 18:03:39.204108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.204148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.204158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.204163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.204168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.204178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 
00:26:51.460 [2024-12-06 18:03:39.214108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.214152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.214161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.214167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.214172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.214182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 00:26:51.460 [2024-12-06 18:03:39.224131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.224174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.224190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.224195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.224200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.224210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 00:26:51.460 [2024-12-06 18:03:39.234153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.234195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.234205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.234210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.234215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.234225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 
00:26:51.460 [2024-12-06 18:03:39.244250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.244312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.244322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.244327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.244331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.244342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 00:26:51.460 [2024-12-06 18:03:39.254194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.254236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.254245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.254251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.254256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.254266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 00:26:51.460 [2024-12-06 18:03:39.264218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.264261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.264270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.264275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.264282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.460 [2024-12-06 18:03:39.264293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.460 qpair failed and we were unable to recover it. 
00:26:51.460 [2024-12-06 18:03:39.274260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.460 [2024-12-06 18:03:39.274335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.460 [2024-12-06 18:03:39.274345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.460 [2024-12-06 18:03:39.274350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.460 [2024-12-06 18:03:39.274354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.461 [2024-12-06 18:03:39.274365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.461 qpair failed and we were unable to recover it. 00:26:51.461 [2024-12-06 18:03:39.284323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.461 [2024-12-06 18:03:39.284368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.461 [2024-12-06 18:03:39.284378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.461 [2024-12-06 18:03:39.284383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.461 [2024-12-06 18:03:39.284388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.461 [2024-12-06 18:03:39.284398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.461 qpair failed and we were unable to recover it. 00:26:51.720 [2024-12-06 18:03:39.294345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.720 [2024-12-06 18:03:39.294387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.720 [2024-12-06 18:03:39.294396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.720 [2024-12-06 18:03:39.294401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.720 [2024-12-06 18:03:39.294406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.720 [2024-12-06 18:03:39.294416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.720 qpair failed and we were unable to recover it. 
00:26:51.720 [2024-12-06 18:03:39.304346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.720 [2024-12-06 18:03:39.304391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.720 [2024-12-06 18:03:39.304400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.720 [2024-12-06 18:03:39.304406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.720 [2024-12-06 18:03:39.304410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.720 [2024-12-06 18:03:39.304420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.720 qpair failed and we were unable to recover it. 00:26:51.720 [2024-12-06 18:03:39.314358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.720 [2024-12-06 18:03:39.314397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.720 [2024-12-06 18:03:39.314407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.720 [2024-12-06 18:03:39.314412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.720 [2024-12-06 18:03:39.314417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.720 [2024-12-06 18:03:39.314426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.720 qpair failed and we were unable to recover it. 00:26:51.720 [2024-12-06 18:03:39.324415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.720 [2024-12-06 18:03:39.324483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.720 [2024-12-06 18:03:39.324493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.720 [2024-12-06 18:03:39.324499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.720 [2024-12-06 18:03:39.324503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.720 [2024-12-06 18:03:39.324514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.720 qpair failed and we were unable to recover it. 
00:26:51.720 [2024-12-06 18:03:39.334306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.720 [2024-12-06 18:03:39.334348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.720 [2024-12-06 18:03:39.334357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.720 [2024-12-06 18:03:39.334363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.720 [2024-12-06 18:03:39.334367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.720 [2024-12-06 18:03:39.334378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.720 qpair failed and we were unable to recover it. 00:26:51.720 [2024-12-06 18:03:39.344462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.720 [2024-12-06 18:03:39.344501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.720 [2024-12-06 18:03:39.344510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.720 [2024-12-06 18:03:39.344515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.720 [2024-12-06 18:03:39.344520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.720 [2024-12-06 18:03:39.344530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.720 qpair failed and we were unable to recover it. 00:26:51.720 [2024-12-06 18:03:39.354503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.720 [2024-12-06 18:03:39.354547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.720 [2024-12-06 18:03:39.354559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.720 [2024-12-06 18:03:39.354565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.720 [2024-12-06 18:03:39.354569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.720 [2024-12-06 18:03:39.354579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.720 qpair failed and we were unable to recover it. 
00:26:51.720 [2024-12-06 18:03:39.364549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.720 [2024-12-06 18:03:39.364590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.720 [2024-12-06 18:03:39.364599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.720 [2024-12-06 18:03:39.364605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.720 [2024-12-06 18:03:39.364609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.720 [2024-12-06 18:03:39.364619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.720 qpair failed and we were unable to recover it. 00:26:51.720 [2024-12-06 18:03:39.374548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.720 [2024-12-06 18:03:39.374588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.720 [2024-12-06 18:03:39.374597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.720 [2024-12-06 18:03:39.374602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.720 [2024-12-06 18:03:39.374607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.720 [2024-12-06 18:03:39.374617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.720 qpair failed and we were unable to recover it. 00:26:51.720 [2024-12-06 18:03:39.384578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.720 [2024-12-06 18:03:39.384660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.720 [2024-12-06 18:03:39.384669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.720 [2024-12-06 18:03:39.384674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.720 [2024-12-06 18:03:39.384679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.720 [2024-12-06 18:03:39.384689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.720 qpair failed and we were unable to recover it. 
00:26:51.720 [2024-12-06 18:03:39.394467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.720 [2024-12-06 18:03:39.394506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.720 [2024-12-06 18:03:39.394516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.720 [2024-12-06 18:03:39.394524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.720 [2024-12-06 18:03:39.394529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.720 [2024-12-06 18:03:39.394539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.720 qpair failed and we were unable to recover it. 00:26:51.720 [2024-12-06 18:03:39.404657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.720 [2024-12-06 18:03:39.404702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.720 [2024-12-06 18:03:39.404712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.720 [2024-12-06 18:03:39.404717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.720 [2024-12-06 18:03:39.404722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.720 [2024-12-06 18:03:39.404732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.720 qpair failed and we were unable to recover it. 00:26:51.720 [2024-12-06 18:03:39.414659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.721 [2024-12-06 18:03:39.414697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.721 [2024-12-06 18:03:39.414707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.721 [2024-12-06 18:03:39.414712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.721 [2024-12-06 18:03:39.414717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.721 [2024-12-06 18:03:39.414727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.721 qpair failed and we were unable to recover it. 
00:26:51.721 [2024-12-06 18:03:39.424678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.721 [2024-12-06 18:03:39.424720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.721 [2024-12-06 18:03:39.424729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.721 [2024-12-06 18:03:39.424734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.721 [2024-12-06 18:03:39.424739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.721 [2024-12-06 18:03:39.424749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.721 qpair failed and we were unable to recover it. 00:26:51.721 [2024-12-06 18:03:39.434700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.721 [2024-12-06 18:03:39.434745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.721 [2024-12-06 18:03:39.434755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.721 [2024-12-06 18:03:39.434760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.721 [2024-12-06 18:03:39.434765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.721 [2024-12-06 18:03:39.434776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.721 qpair failed and we were unable to recover it. 00:26:51.721 [2024-12-06 18:03:39.444724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.721 [2024-12-06 18:03:39.444766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.721 [2024-12-06 18:03:39.444776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.721 [2024-12-06 18:03:39.444782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.721 [2024-12-06 18:03:39.444787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.721 [2024-12-06 18:03:39.444798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.721 qpair failed and we were unable to recover it. 
00:26:51.721 [2024-12-06 18:03:39.454759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.721 [2024-12-06 18:03:39.454842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.721 [2024-12-06 18:03:39.454852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.721 [2024-12-06 18:03:39.454857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.721 [2024-12-06 18:03:39.454863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.721 [2024-12-06 18:03:39.454873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.721 qpair failed and we were unable to recover it. 00:26:51.721 [2024-12-06 18:03:39.464797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.721 [2024-12-06 18:03:39.464843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.721 [2024-12-06 18:03:39.464862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.721 [2024-12-06 18:03:39.464868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.721 [2024-12-06 18:03:39.464874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.721 [2024-12-06 18:03:39.464889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.721 qpair failed and we were unable to recover it. 00:26:51.721 [2024-12-06 18:03:39.474675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.721 [2024-12-06 18:03:39.474716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.721 [2024-12-06 18:03:39.474727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.721 [2024-12-06 18:03:39.474732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.721 [2024-12-06 18:03:39.474737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.721 [2024-12-06 18:03:39.474748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.721 qpair failed and we were unable to recover it. 
00:26:51.721 [2024-12-06 18:03:39.484810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.721 [2024-12-06 18:03:39.484867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.721 [2024-12-06 18:03:39.484877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.721 [2024-12-06 18:03:39.484882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.721 [2024-12-06 18:03:39.484887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.721 [2024-12-06 18:03:39.484897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.721 qpair failed and we were unable to recover it. 00:26:51.721 [2024-12-06 18:03:39.494863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.721 [2024-12-06 18:03:39.494908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.721 [2024-12-06 18:03:39.494918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.721 [2024-12-06 18:03:39.494923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.721 [2024-12-06 18:03:39.494928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.721 [2024-12-06 18:03:39.494938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.721 qpair failed and we were unable to recover it. 00:26:51.721 [2024-12-06 18:03:39.504920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.721 [2024-12-06 18:03:39.505006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.721 [2024-12-06 18:03:39.505024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.721 [2024-12-06 18:03:39.505030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.721 [2024-12-06 18:03:39.505036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.721 [2024-12-06 18:03:39.505050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.721 qpair failed and we were unable to recover it. 
00:26:51.721 [2024-12-06 18:03:39.514928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.721 [2024-12-06 18:03:39.514971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.721 [2024-12-06 18:03:39.514983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.721 [2024-12-06 18:03:39.514988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.721 [2024-12-06 18:03:39.514993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.721 [2024-12-06 18:03:39.515004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.721 qpair failed and we were unable to recover it. 00:26:51.721 [2024-12-06 18:03:39.524951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.721 [2024-12-06 18:03:39.524997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.721 [2024-12-06 18:03:39.525007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.721 [2024-12-06 18:03:39.525016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.721 [2024-12-06 18:03:39.525020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.721 [2024-12-06 18:03:39.525031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.721 qpair failed and we were unable to recover it. 00:26:51.721 [2024-12-06 18:03:39.534963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.721 [2024-12-06 18:03:39.535007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.721 [2024-12-06 18:03:39.535017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.721 [2024-12-06 18:03:39.535022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.721 [2024-12-06 18:03:39.535027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.721 [2024-12-06 18:03:39.535037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.721 qpair failed and we were unable to recover it. 
00:26:51.722 [2024-12-06 18:03:39.545015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.722 [2024-12-06 18:03:39.545079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.722 [2024-12-06 18:03:39.545089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.722 [2024-12-06 18:03:39.545094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.722 [2024-12-06 18:03:39.545102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.722 [2024-12-06 18:03:39.545113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.722 qpair failed and we were unable to recover it. 00:26:51.981 [2024-12-06 18:03:39.555065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.981 [2024-12-06 18:03:39.555115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.981 [2024-12-06 18:03:39.555125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.981 [2024-12-06 18:03:39.555130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.981 [2024-12-06 18:03:39.555135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.981 [2024-12-06 18:03:39.555146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.981 qpair failed and we were unable to recover it. 00:26:51.981 [2024-12-06 18:03:39.565075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.981 [2024-12-06 18:03:39.565123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.981 [2024-12-06 18:03:39.565133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.981 [2024-12-06 18:03:39.565139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.981 [2024-12-06 18:03:39.565144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.981 [2024-12-06 18:03:39.565157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.981 qpair failed and we were unable to recover it. 
00:26:51.981 [2024-12-06 18:03:39.574974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.981 [2024-12-06 18:03:39.575014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.981 [2024-12-06 18:03:39.575024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.981 [2024-12-06 18:03:39.575029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.981 [2024-12-06 18:03:39.575034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.981 [2024-12-06 18:03:39.575045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.981 qpair failed and we were unable to recover it. 00:26:51.981 [2024-12-06 18:03:39.585098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.981 [2024-12-06 18:03:39.585145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.981 [2024-12-06 18:03:39.585155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.981 [2024-12-06 18:03:39.585160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.981 [2024-12-06 18:03:39.585165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.981 [2024-12-06 18:03:39.585175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.982 qpair failed and we were unable to recover it. 00:26:51.982 [2024-12-06 18:03:39.595177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.982 [2024-12-06 18:03:39.595219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.982 [2024-12-06 18:03:39.595229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.982 [2024-12-06 18:03:39.595234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.982 [2024-12-06 18:03:39.595239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.982 [2024-12-06 18:03:39.595249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.982 qpair failed and we were unable to recover it. 
00:26:51.982 [2024-12-06 18:03:39.605168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.982 [2024-12-06 18:03:39.605212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.982 [2024-12-06 18:03:39.605222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.982 [2024-12-06 18:03:39.605227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.982 [2024-12-06 18:03:39.605231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.982 [2024-12-06 18:03:39.605242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.982 qpair failed and we were unable to recover it. 00:26:51.982 [2024-12-06 18:03:39.615191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.982 [2024-12-06 18:03:39.615236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.982 [2024-12-06 18:03:39.615245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.982 [2024-12-06 18:03:39.615251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.982 [2024-12-06 18:03:39.615255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.982 [2024-12-06 18:03:39.615266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.982 qpair failed and we were unable to recover it. 00:26:51.982 [2024-12-06 18:03:39.625211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.982 [2024-12-06 18:03:39.625249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.982 [2024-12-06 18:03:39.625259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.982 [2024-12-06 18:03:39.625264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.982 [2024-12-06 18:03:39.625269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.982 [2024-12-06 18:03:39.625280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.982 qpair failed and we were unable to recover it. 
00:26:51.982 [2024-12-06 18:03:39.635260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.982 [2024-12-06 18:03:39.635302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.982 [2024-12-06 18:03:39.635311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.982 [2024-12-06 18:03:39.635316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.982 [2024-12-06 18:03:39.635321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.982 [2024-12-06 18:03:39.635332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.982 qpair failed and we were unable to recover it. 00:26:51.982 [2024-12-06 18:03:39.645283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.982 [2024-12-06 18:03:39.645325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.982 [2024-12-06 18:03:39.645334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.982 [2024-12-06 18:03:39.645339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.982 [2024-12-06 18:03:39.645344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.982 [2024-12-06 18:03:39.645354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.982 qpair failed and we were unable to recover it. 00:26:51.982 [2024-12-06 18:03:39.655289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.982 [2024-12-06 18:03:39.655331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.982 [2024-12-06 18:03:39.655343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.982 [2024-12-06 18:03:39.655348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.982 [2024-12-06 18:03:39.655353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.982 [2024-12-06 18:03:39.655363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.982 qpair failed and we were unable to recover it. 
00:26:51.982 [2024-12-06 18:03:39.665344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.982 [2024-12-06 18:03:39.665384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.982 [2024-12-06 18:03:39.665394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.982 [2024-12-06 18:03:39.665399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.982 [2024-12-06 18:03:39.665404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.982 [2024-12-06 18:03:39.665414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.982 qpair failed and we were unable to recover it. 00:26:51.982 [2024-12-06 18:03:39.675342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.982 [2024-12-06 18:03:39.675384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.982 [2024-12-06 18:03:39.675393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.982 [2024-12-06 18:03:39.675398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.982 [2024-12-06 18:03:39.675403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.982 [2024-12-06 18:03:39.675413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.982 qpair failed and we were unable to recover it. 00:26:51.982 [2024-12-06 18:03:39.685302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.982 [2024-12-06 18:03:39.685344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.982 [2024-12-06 18:03:39.685353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.982 [2024-12-06 18:03:39.685358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.982 [2024-12-06 18:03:39.685363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.982 [2024-12-06 18:03:39.685373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.982 qpair failed and we were unable to recover it. 
00:26:51.982 [2024-12-06 18:03:39.695378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.982 [2024-12-06 18:03:39.695421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.982 [2024-12-06 18:03:39.695430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.982 [2024-12-06 18:03:39.695435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.982 [2024-12-06 18:03:39.695440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.982 [2024-12-06 18:03:39.695454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.982 qpair failed and we were unable to recover it. 00:26:51.982 [2024-12-06 18:03:39.705431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.982 [2024-12-06 18:03:39.705486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.982 [2024-12-06 18:03:39.705496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.982 [2024-12-06 18:03:39.705501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.982 [2024-12-06 18:03:39.705505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.982 [2024-12-06 18:03:39.705515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.982 qpair failed and we were unable to recover it. 00:26:51.982 [2024-12-06 18:03:39.715426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.982 [2024-12-06 18:03:39.715467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.982 [2024-12-06 18:03:39.715476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.982 [2024-12-06 18:03:39.715482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.982 [2024-12-06 18:03:39.715486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.982 [2024-12-06 18:03:39.715496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.983 qpair failed and we were unable to recover it. 
00:26:51.983 [2024-12-06 18:03:39.725504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.983 [2024-12-06 18:03:39.725551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.983 [2024-12-06 18:03:39.725561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.983 [2024-12-06 18:03:39.725566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.983 [2024-12-06 18:03:39.725570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.983 [2024-12-06 18:03:39.725581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.983 qpair failed and we were unable to recover it. 00:26:51.983 [2024-12-06 18:03:39.735485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.983 [2024-12-06 18:03:39.735547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.983 [2024-12-06 18:03:39.735557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.983 [2024-12-06 18:03:39.735562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.983 [2024-12-06 18:03:39.735567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.983 [2024-12-06 18:03:39.735577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.983 qpair failed and we were unable to recover it. 00:26:51.983 [2024-12-06 18:03:39.745546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.983 [2024-12-06 18:03:39.745625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.983 [2024-12-06 18:03:39.745634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.983 [2024-12-06 18:03:39.745640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.983 [2024-12-06 18:03:39.745644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.983 [2024-12-06 18:03:39.745654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.983 qpair failed and we were unable to recover it. 
00:26:51.983 [2024-12-06 18:03:39.755585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.983 [2024-12-06 18:03:39.755671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.983 [2024-12-06 18:03:39.755681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.983 [2024-12-06 18:03:39.755686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.983 [2024-12-06 18:03:39.755691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.983 [2024-12-06 18:03:39.755701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.983 qpair failed and we were unable to recover it. 00:26:51.983 [2024-12-06 18:03:39.765592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.983 [2024-12-06 18:03:39.765637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.983 [2024-12-06 18:03:39.765646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.983 [2024-12-06 18:03:39.765651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.983 [2024-12-06 18:03:39.765656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.983 [2024-12-06 18:03:39.765666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.983 qpair failed and we were unable to recover it. 00:26:51.983 [2024-12-06 18:03:39.775620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.983 [2024-12-06 18:03:39.775659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.983 [2024-12-06 18:03:39.775668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.983 [2024-12-06 18:03:39.775673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.983 [2024-12-06 18:03:39.775678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.983 [2024-12-06 18:03:39.775688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.983 qpair failed and we were unable to recover it. 
00:26:51.983 [2024-12-06 18:03:39.785651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.983 [2024-12-06 18:03:39.785688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.983 [2024-12-06 18:03:39.785699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.983 [2024-12-06 18:03:39.785704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.983 [2024-12-06 18:03:39.785709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.983 [2024-12-06 18:03:39.785719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.983 qpair failed and we were unable to recover it. 00:26:51.983 [2024-12-06 18:03:39.795672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.983 [2024-12-06 18:03:39.795713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.983 [2024-12-06 18:03:39.795723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.983 [2024-12-06 18:03:39.795728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.983 [2024-12-06 18:03:39.795733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.983 [2024-12-06 18:03:39.795743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.983 qpair failed and we were unable to recover it. 00:26:51.983 [2024-12-06 18:03:39.805718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.983 [2024-12-06 18:03:39.805803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.983 [2024-12-06 18:03:39.805813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.983 [2024-12-06 18:03:39.805819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.983 [2024-12-06 18:03:39.805823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:51.983 [2024-12-06 18:03:39.805833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:51.983 qpair failed and we were unable to recover it. 
00:26:52.243 [2024-12-06 18:03:39.815726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.243 [2024-12-06 18:03:39.815764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.243 [2024-12-06 18:03:39.815773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.243 [2024-12-06 18:03:39.815779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.243 [2024-12-06 18:03:39.815783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.243 [2024-12-06 18:03:39.815793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.243 qpair failed and we were unable to recover it. 00:26:52.243 [2024-12-06 18:03:39.825748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.243 [2024-12-06 18:03:39.825790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.243 [2024-12-06 18:03:39.825800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.243 [2024-12-06 18:03:39.825805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.243 [2024-12-06 18:03:39.825813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.243 [2024-12-06 18:03:39.825823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.243 qpair failed and we were unable to recover it. 00:26:52.243 [2024-12-06 18:03:39.835782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.243 [2024-12-06 18:03:39.835822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.243 [2024-12-06 18:03:39.835832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.243 [2024-12-06 18:03:39.835837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.243 [2024-12-06 18:03:39.835841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.243 [2024-12-06 18:03:39.835851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.243 qpair failed and we were unable to recover it. 
00:26:52.243 [2024-12-06 18:03:39.845809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.243 [2024-12-06 18:03:39.845848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.243 [2024-12-06 18:03:39.845857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.243 [2024-12-06 18:03:39.845863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.243 [2024-12-06 18:03:39.845868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.243 [2024-12-06 18:03:39.845877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.243 qpair failed and we were unable to recover it. 00:26:52.243 [2024-12-06 18:03:39.855817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.243 [2024-12-06 18:03:39.855876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.243 [2024-12-06 18:03:39.855886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.243 [2024-12-06 18:03:39.855891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.243 [2024-12-06 18:03:39.855896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.243 [2024-12-06 18:03:39.855906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.243 qpair failed and we were unable to recover it. 00:26:52.243 [2024-12-06 18:03:39.865724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.243 [2024-12-06 18:03:39.865803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.243 [2024-12-06 18:03:39.865812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.243 [2024-12-06 18:03:39.865817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.243 [2024-12-06 18:03:39.865822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.243 [2024-12-06 18:03:39.865832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.243 qpair failed and we were unable to recover it. 
00:26:52.243 [2024-12-06 18:03:39.875898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.243 [2024-12-06 18:03:39.875939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.243 [2024-12-06 18:03:39.875948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.243 [2024-12-06 18:03:39.875954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.243 [2024-12-06 18:03:39.875958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.243 [2024-12-06 18:03:39.875969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.243 qpair failed and we were unable to recover it. 00:26:52.243 [2024-12-06 18:03:39.885928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.243 [2024-12-06 18:03:39.885972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.243 [2024-12-06 18:03:39.885981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.243 [2024-12-06 18:03:39.885986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.243 [2024-12-06 18:03:39.885991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.243 [2024-12-06 18:03:39.886001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.243 qpair failed and we were unable to recover it. 00:26:52.243 [2024-12-06 18:03:39.895814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.243 [2024-12-06 18:03:39.895854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.243 [2024-12-06 18:03:39.895864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.243 [2024-12-06 18:03:39.895869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.243 [2024-12-06 18:03:39.895874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.243 [2024-12-06 18:03:39.895884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.243 qpair failed and we were unable to recover it. 
00:26:52.243 [2024-12-06 18:03:39.905971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.243 [2024-12-06 18:03:39.906012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.244 [2024-12-06 18:03:39.906021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.244 [2024-12-06 18:03:39.906026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.244 [2024-12-06 18:03:39.906031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.244 [2024-12-06 18:03:39.906041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.244 qpair failed and we were unable to recover it. 00:26:52.244 [2024-12-06 18:03:39.915994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.244 [2024-12-06 18:03:39.916035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.244 [2024-12-06 18:03:39.916047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.244 [2024-12-06 18:03:39.916052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.244 [2024-12-06 18:03:39.916057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.244 [2024-12-06 18:03:39.916067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.244 qpair failed and we were unable to recover it. 00:26:52.244 [2024-12-06 18:03:39.926004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.244 [2024-12-06 18:03:39.926047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.244 [2024-12-06 18:03:39.926056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.244 [2024-12-06 18:03:39.926061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.244 [2024-12-06 18:03:39.926066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.244 [2024-12-06 18:03:39.926077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.244 qpair failed and we were unable to recover it. 
00:26:52.244 [2024-12-06 18:03:39.936023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.244 [2024-12-06 18:03:39.936061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.244 [2024-12-06 18:03:39.936071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.244 [2024-12-06 18:03:39.936076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.244 [2024-12-06 18:03:39.936081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.244 [2024-12-06 18:03:39.936091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.244 qpair failed and we were unable to recover it. 00:26:52.244 [2024-12-06 18:03:39.946080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.244 [2024-12-06 18:03:39.946165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.244 [2024-12-06 18:03:39.946175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.244 [2024-12-06 18:03:39.946180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.244 [2024-12-06 18:03:39.946185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.244 [2024-12-06 18:03:39.946195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.244 qpair failed and we were unable to recover it. 00:26:52.244 [2024-12-06 18:03:39.956125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.244 [2024-12-06 18:03:39.956169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.244 [2024-12-06 18:03:39.956180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.244 [2024-12-06 18:03:39.956192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.244 [2024-12-06 18:03:39.956198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.244 [2024-12-06 18:03:39.956209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.244 qpair failed and we were unable to recover it. 
00:26:52.244 [2024-12-06 18:03:39.966151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.244 [2024-12-06 18:03:39.966198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.244 [2024-12-06 18:03:39.966207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.244 [2024-12-06 18:03:39.966213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.244 [2024-12-06 18:03:39.966218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.244 [2024-12-06 18:03:39.966228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.244 qpair failed and we were unable to recover it. 00:26:52.244 [2024-12-06 18:03:39.976157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.244 [2024-12-06 18:03:39.976202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.244 [2024-12-06 18:03:39.976211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.244 [2024-12-06 18:03:39.976216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.244 [2024-12-06 18:03:39.976221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.244 [2024-12-06 18:03:39.976232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.244 qpair failed and we were unable to recover it. 00:26:52.244 [2024-12-06 18:03:39.986186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.244 [2024-12-06 18:03:39.986243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.244 [2024-12-06 18:03:39.986253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.244 [2024-12-06 18:03:39.986258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.244 [2024-12-06 18:03:39.986263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.244 [2024-12-06 18:03:39.986273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.244 qpair failed and we were unable to recover it. 
00:26:52.244 [2024-12-06 18:03:39.996222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.244 [2024-12-06 18:03:39.996265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.244 [2024-12-06 18:03:39.996274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.244 [2024-12-06 18:03:39.996280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.244 [2024-12-06 18:03:39.996285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.244 [2024-12-06 18:03:39.996295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.244 qpair failed and we were unable to recover it. 00:26:52.244 [2024-12-06 18:03:40.006286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.244 [2024-12-06 18:03:40.006333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.244 [2024-12-06 18:03:40.006345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.244 [2024-12-06 18:03:40.006351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.244 [2024-12-06 18:03:40.006356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.244 [2024-12-06 18:03:40.006368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.244 qpair failed and we were unable to recover it. 00:26:52.244 [2024-12-06 18:03:40.016345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.244 [2024-12-06 18:03:40.016401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.244 [2024-12-06 18:03:40.016411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.244 [2024-12-06 18:03:40.016417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.244 [2024-12-06 18:03:40.016422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.244 [2024-12-06 18:03:40.016432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.244 qpair failed and we were unable to recover it. 
00:26:52.244 [2024-12-06 18:03:40.026325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.244 [2024-12-06 18:03:40.026427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.244 [2024-12-06 18:03:40.026438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.244 [2024-12-06 18:03:40.026443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.244 [2024-12-06 18:03:40.026448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.244 [2024-12-06 18:03:40.026458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.244 qpair failed and we were unable to recover it. 00:26:52.244 [2024-12-06 18:03:40.036358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.244 [2024-12-06 18:03:40.036402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.245 [2024-12-06 18:03:40.036411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.245 [2024-12-06 18:03:40.036417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.245 [2024-12-06 18:03:40.036422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.245 [2024-12-06 18:03:40.036432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.245 qpair failed and we were unable to recover it. 00:26:52.245 [2024-12-06 18:03:40.046407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.245 [2024-12-06 18:03:40.046494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.245 [2024-12-06 18:03:40.046503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.245 [2024-12-06 18:03:40.046509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.245 [2024-12-06 18:03:40.046514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.245 [2024-12-06 18:03:40.046524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.245 qpair failed and we were unable to recover it. 
00:26:52.245 [2024-12-06 18:03:40.056377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.245 [2024-12-06 18:03:40.056419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.245 [2024-12-06 18:03:40.056428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.245 [2024-12-06 18:03:40.056434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.245 [2024-12-06 18:03:40.056439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.245 [2024-12-06 18:03:40.056449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.245 qpair failed and we were unable to recover it. 00:26:52.245 [2024-12-06 18:03:40.066416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.245 [2024-12-06 18:03:40.066455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.245 [2024-12-06 18:03:40.066464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.245 [2024-12-06 18:03:40.066469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.245 [2024-12-06 18:03:40.066474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff23c000b90 00:26:52.245 [2024-12-06 18:03:40.066485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.245 qpair failed and we were unable to recover it. 00:26:52.505 [2024-12-06 18:03:40.076406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.505 [2024-12-06 18:03:40.076455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.505 [2024-12-06 18:03:40.076475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.505 [2024-12-06 18:03:40.076481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.505 [2024-12-06 18:03:40.076487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff248000b90 00:26:52.505 [2024-12-06 18:03:40.076501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:52.505 qpair failed and we were unable to recover it. 
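From here the rejection starts hitting other queues as well: the tqpair pointer changes (0x7ff248000b90) and the qpair id drops to 1, with ids 3 and 2 following below, so every I/O queue the host retries is being refused, not just one bad queue. While the window is open, the target's view of the subsystem can be checked over its RPC socket; a sketch using the standard nvmf_get_subsystems RPC (the relative rpc.py path and the default RPC socket are assumptions):

# Ask the target which subsystems/listeners it currently knows about.
./scripts/rpc.py nvmf_get_subsystems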
00:26:52.505 [2024-12-06 18:03:40.086360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.505 [2024-12-06 18:03:40.086403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.505 [2024-12-06 18:03:40.086415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.505 [2024-12-06 18:03:40.086424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.505 [2024-12-06 18:03:40.086429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff248000b90 00:26:52.505 [2024-12-06 18:03:40.086440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:52.505 qpair failed and we were unable to recover it. 00:26:52.505 [2024-12-06 18:03:40.086829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132a030 is same with the state(6) to be set 00:26:52.505 [2024-12-06 18:03:40.096499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.505 [2024-12-06 18:03:40.096558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.505 [2024-12-06 18:03:40.096585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.505 [2024-12-06 18:03:40.096595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.505 [2024-12-06 18:03:40.096603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x132d490 00:26:52.505 [2024-12-06 18:03:40.096623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.505 qpair failed and we were unable to recover it. 00:26:52.505 [2024-12-06 18:03:40.106386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.506 [2024-12-06 18:03:40.106432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.506 [2024-12-06 18:03:40.106447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.506 [2024-12-06 18:03:40.106455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.506 [2024-12-06 18:03:40.106462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x132d490 00:26:52.506 [2024-12-06 18:03:40.106477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.506 qpair failed and we were unable to recover it. 
00:26:52.506 [2024-12-06 18:03:40.116543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.506 [2024-12-06 18:03:40.116592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.506 [2024-12-06 18:03:40.116611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.506 [2024-12-06 18:03:40.116618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.506 [2024-12-06 18:03:40.116623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90 00:26:52.506 [2024-12-06 18:03:40.116637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.506 qpair failed and we were unable to recover it. 00:26:52.506 [2024-12-06 18:03:40.126582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.506 [2024-12-06 18:03:40.126626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.506 [2024-12-06 18:03:40.126637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.506 [2024-12-06 18:03:40.126642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.506 [2024-12-06 18:03:40.126650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff240000b90 00:26:52.506 [2024-12-06 18:03:40.126661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.506 qpair failed and we were unable to recover it. 00:26:52.506 [2024-12-06 18:03:40.127017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x132a030 (9): Bad file descriptor 00:26:52.506 Initializing NVMe Controllers 00:26:52.506 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:52.506 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:52.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:52.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:52.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:52.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:52.506 Initialization complete. Launching workers. 
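Despite the storm of rejected CONNECTs, the initiator does attach and spreads the connection across four lcores before launching its workers (the per-core threads start just below). The banner is consistent with SPDK's perf-style example initiators; an illustrative invocation with placeholder workload parameters (queue depth, I/O size, mix, and runtime are made up, not the values this test used) would look like:

# Purely illustrative perf-style run against the target seen in this log.
./build/examples/perf -q 32 -o 4096 -w randrw -M 50 -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'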
00:26:52.506 Starting thread on core 1 00:26:52.506 Starting thread on core 2 00:26:52.506 Starting thread on core 3 00:26:52.506 Starting thread on core 0 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:52.506 00:26:52.506 real 0m11.303s 00:26:52.506 user 0m21.442s 00:26:52.506 sys 0m3.553s 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:52.506 ************************************ 00:26:52.506 END TEST nvmf_target_disconnect_tc2 00:26:52.506 ************************************ 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:52.506 rmmod nvme_tcp 00:26:52.506 rmmod nvme_fabrics 00:26:52.506 rmmod nvme_keyring 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3216735 ']' 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3216735 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3216735 ']' 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3216735 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3216735 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3216735' 00:26:52.506 killing process with pid 3216735 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 3216735 00:26:52.506 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3216735 00:26:52.765 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:52.766 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:52.766 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:52.766 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:26:52.766 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:26:52.766 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:52.766 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:26:52.766 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:52.766 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:52.766 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.766 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:52.766 18:03:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.675 18:03:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:54.675 00:26:54.675 real 0m19.304s 00:26:54.675 user 0m48.510s 00:26:54.675 sys 0m8.030s 00:26:54.675 18:03:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:54.675 18:03:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:54.675 ************************************ 00:26:54.675 END TEST nvmf_target_disconnect 00:26:54.675 ************************************ 00:26:54.675 18:03:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:54.675 00:26:54.675 real 5m32.559s 00:26:54.675 user 10m17.450s 00:26:54.675 sys 1m40.890s 00:26:54.675 18:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:54.675 18:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.675 ************************************ 00:26:54.675 END TEST nvmf_host 00:26:54.675 ************************************ 00:26:54.675 18:03:42 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:26:54.675 18:03:42 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:26:54.675 18:03:42 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:54.675 18:03:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:54.675 18:03:42 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:54.675 18:03:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:54.934 ************************************ 00:26:54.934 START TEST nvmf_target_core_interrupt_mode 00:26:54.934 ************************************ 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:54.934 * Looking for test storage... 00:26:54.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:54.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.934 --rc genhtml_branch_coverage=1 00:26:54.934 --rc genhtml_function_coverage=1 00:26:54.934 --rc genhtml_legend=1 00:26:54.934 --rc geninfo_all_blocks=1 00:26:54.934 --rc geninfo_unexecuted_blocks=1 00:26:54.934 00:26:54.934 ' 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:54.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.934 --rc genhtml_branch_coverage=1 00:26:54.934 --rc genhtml_function_coverage=1 00:26:54.934 --rc genhtml_legend=1 00:26:54.934 --rc geninfo_all_blocks=1 00:26:54.934 --rc geninfo_unexecuted_blocks=1 00:26:54.934 00:26:54.934 ' 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:54.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.934 --rc genhtml_branch_coverage=1 00:26:54.934 --rc genhtml_function_coverage=1 00:26:54.934 --rc genhtml_legend=1 00:26:54.934 --rc geninfo_all_blocks=1 00:26:54.934 --rc geninfo_unexecuted_blocks=1 00:26:54.934 00:26:54.934 ' 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:54.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.934 --rc genhtml_branch_coverage=1 00:26:54.934 --rc genhtml_function_coverage=1 00:26:54.934 --rc genhtml_legend=1 00:26:54.934 --rc geninfo_all_blocks=1 00:26:54.934 --rc geninfo_unexecuted_blocks=1 00:26:54.934 00:26:54.934 ' 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:54.934 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:54.935 ************************************ 00:26:54.935 START TEST nvmf_abort 00:26:54.935 ************************************ 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:54.935 * Looking for test storage... 00:26:54.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:26:54.935 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:55.196 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:55.196 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:55.196 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:55.196 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:55.196 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:26:55.196 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:26:55.196 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:26:55.196 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:55.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.197 --rc genhtml_branch_coverage=1 00:26:55.197 --rc genhtml_function_coverage=1 00:26:55.197 --rc genhtml_legend=1 00:26:55.197 --rc geninfo_all_blocks=1 00:26:55.197 --rc geninfo_unexecuted_blocks=1 00:26:55.197 00:26:55.197 ' 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:55.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.197 --rc genhtml_branch_coverage=1 00:26:55.197 --rc genhtml_function_coverage=1 00:26:55.197 --rc genhtml_legend=1 00:26:55.197 --rc geninfo_all_blocks=1 00:26:55.197 --rc geninfo_unexecuted_blocks=1 00:26:55.197 00:26:55.197 ' 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:55.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.197 --rc genhtml_branch_coverage=1 00:26:55.197 --rc genhtml_function_coverage=1 00:26:55.197 --rc genhtml_legend=1 00:26:55.197 --rc geninfo_all_blocks=1 00:26:55.197 --rc geninfo_unexecuted_blocks=1 00:26:55.197 00:26:55.197 ' 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:55.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.197 --rc genhtml_branch_coverage=1 00:26:55.197 --rc genhtml_function_coverage=1 00:26:55.197 --rc genhtml_legend=1 00:26:55.197 --rc geninfo_all_blocks=1 00:26:55.197 --rc geninfo_unexecuted_blocks=1 00:26:55.197 00:26:55.197 ' 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.197 18:03:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:55.197 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.198 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.198 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.198 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:55.198 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:55.198 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:26:55.198 18:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:00.479 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:00.480 18:03:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:00.480 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:00.480 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:00.480 Found net devices under 0000:31:00.0: cvl_0_0 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:00.480 Found net devices under 0000:31:00.1: cvl_0_1 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:00.480 18:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:00.480 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:00.480 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:00.480 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:00.480 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:00.480 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:00.480 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:00.480 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:00.480 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:00.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:00.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.749 ms 00:27:00.480 00:27:00.480 --- 10.0.0.2 ping statistics --- 00:27:00.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.480 rtt min/avg/max/mdev = 0.749/0.749/0.749/0.000 ms 00:27:00.480 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:00.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:00.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:27:00.481 00:27:00.481 --- 10.0.0.1 ping statistics --- 00:27:00.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.481 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3222776 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3222776 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3222776 ']' 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:00.481 18:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:00.481 [2024-12-06 18:03:48.275161] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:00.481 [2024-12-06 18:03:48.276266] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:27:00.481 [2024-12-06 18:03:48.276313] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:00.741 [2024-12-06 18:03:48.366513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:00.741 [2024-12-06 18:03:48.403305] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:00.741 [2024-12-06 18:03:48.403338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:00.741 [2024-12-06 18:03:48.403346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:00.741 [2024-12-06 18:03:48.403353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:00.741 [2024-12-06 18:03:48.403359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:00.741 [2024-12-06 18:03:48.404872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:00.741 [2024-12-06 18:03:48.405023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.741 [2024-12-06 18:03:48.405024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:00.741 [2024-12-06 18:03:48.461041] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:00.741 [2024-12-06 18:03:48.462012] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:00.741 [2024-12-06 18:03:48.462358] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:27:00.741 [2024-12-06 18:03:48.462372] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:01.311 [2024-12-06 18:03:49.085791] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:01.311 Malloc0 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:01.311 Delay0 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.311 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:01.571 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.572 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:01.572 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:01.572 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:01.572 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.572 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:01.572 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.572 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:01.572 [2024-12-06 18:03:49.153637] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.572 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.572 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:01.572 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.572 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:01.572 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.572 18:03:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:01.572 [2024-12-06 18:03:49.255313] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:04.110 Initializing NVMe Controllers 00:27:04.110 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:04.110 controller IO queue size 128 less than required 00:27:04.110 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:04.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:04.110 Initialization complete. Launching workers. 
00:27:04.110 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29078 00:27:04.110 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29135, failed to submit 66 00:27:04.110 success 29078, unsuccessful 57, failed 0 00:27:04.110 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:04.110 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.110 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:04.110 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.110 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:04.110 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:04.110 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:04.110 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:04.110 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:04.110 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:04.110 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:04.110 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:04.110 rmmod nvme_tcp 00:27:04.110 rmmod nvme_fabrics 00:27:04.110 rmmod nvme_keyring 00:27:04.110 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3222776 ']' 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3222776 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3222776 ']' 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3222776 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3222776 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3222776' 00:27:04.111 killing process with pid 3222776 
00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3222776 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3222776 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.111 18:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.016 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:06.016 00:27:06.016 real 0m11.071s 00:27:06.016 user 0m10.360s 00:27:06.016 sys 0m5.198s 00:27:06.016 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:06.016 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:06.016 ************************************ 00:27:06.016 END TEST nvmf_abort 00:27:06.016 ************************************ 00:27:06.016 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:06.016 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:06.016 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:06.016 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:06.016 ************************************ 00:27:06.016 START TEST nvmf_ns_hotplug_stress 00:27:06.016 ************************************ 00:27:06.016 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:06.276 * Looking for test storage... 
00:27:06.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:06.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.276 --rc genhtml_branch_coverage=1 00:27:06.276 --rc genhtml_function_coverage=1 00:27:06.276 --rc genhtml_legend=1 00:27:06.276 --rc geninfo_all_blocks=1 00:27:06.276 --rc geninfo_unexecuted_blocks=1 00:27:06.276 00:27:06.276 ' 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:06.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.276 --rc genhtml_branch_coverage=1 00:27:06.276 --rc genhtml_function_coverage=1 00:27:06.276 --rc genhtml_legend=1 00:27:06.276 --rc geninfo_all_blocks=1 00:27:06.276 --rc geninfo_unexecuted_blocks=1 00:27:06.276 00:27:06.276 ' 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:06.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.276 --rc genhtml_branch_coverage=1 00:27:06.276 --rc genhtml_function_coverage=1 00:27:06.276 --rc genhtml_legend=1 00:27:06.276 --rc geninfo_all_blocks=1 00:27:06.276 --rc geninfo_unexecuted_blocks=1 00:27:06.276 00:27:06.276 ' 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:06.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.276 --rc genhtml_branch_coverage=1 00:27:06.276 --rc genhtml_function_coverage=1 
00:27:06.276 --rc genhtml_legend=1 00:27:06.276 --rc geninfo_all_blocks=1 00:27:06.276 --rc geninfo_unexecuted_blocks=1 00:27:06.276 00:27:06.276 ' 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.276 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:06.277 18:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:11.551 18:03:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:11.551 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:11.552 18:03:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:11.552 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:11.552 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:11.552 
18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:11.552 Found net devices under 0000:31:00.0: cvl_0_0 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:11.552 Found net devices under 0000:31:00.1: cvl_0_1 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:11.552 18:03:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:11.552 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:11.811 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:11.811 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:11.811 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:11.811 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:11.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:11.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:27:11.811 00:27:11.811 --- 10.0.0.2 ping statistics --- 00:27:11.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.811 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:27:11.811 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:11.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:11.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:27:11.811 00:27:11.811 --- 10.0.0.1 ping statistics --- 00:27:11.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.812 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3227837 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3227837 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3227837 ']' 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
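Condensed, the network plumbing traced above (nvmf/common.sh's nvmf_tcp_init) comes down to the following; a minimal sketch, every command taken from the trace, assuming the two ice-driver ports were already renamed cvl_0_0/cvl_0_1 as shown:

  # flush any stale addresses, then move the target-side port into its own namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # address both ends: initiator on the host side, target inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port (the ipts wrapper tags the rule with an SPDK_NVMF
  # comment, presumably so teardown can find and delete it) and verify both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1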
00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:11.812 18:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:11.812 [2024-12-06 18:03:59.545370] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:11.812 [2024-12-06 18:03:59.546537] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:27:11.812 [2024-12-06 18:03:59.546587] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.071 [2024-12-06 18:03:59.641052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:12.071 [2024-12-06 18:03:59.692189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:12.071 [2024-12-06 18:03:59.692243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:12.071 [2024-12-06 18:03:59.692254] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:12.071 [2024-12-06 18:03:59.692262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:12.071 [2024-12-06 18:03:59.692268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:12.071 [2024-12-06 18:03:59.694179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:12.071 [2024-12-06 18:03:59.694350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.071 [2024-12-06 18:03:59.694350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:12.071 [2024-12-06 18:03:59.772029] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:12.071 [2024-12-06 18:03:59.772881] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:12.071 [2024-12-06 18:03:59.772997] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:12.071 [2024-12-06 18:03:59.773281] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
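The target launch itself is the nvmf/common.sh@508 record above; spelled out below, with a polling loop standing in for waitforlisten (the helper's internals are not part of this trace, so the rpc_get_methods probe is an assumption):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # -m 0xE pins three reactors to cores 1-3, matching the three reactor
  # start-up notices above; --interrupt-mode lets them sleep between events
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  # poll the app's UNIX-domain RPC socket until it answers
  until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do sleep 0.1; done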
00:27:12.640 18:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:12.640 18:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:12.640 18:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:12.640 18:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:12.640 18:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:12.640 18:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:12.640 18:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:27:12.640 18:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:12.898 [2024-12-06 18:04:00.535358] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:12.898 18:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:13.157 18:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:13.157 [2024-12-06 18:04:00.895962] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:13.157 18:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:13.416 18:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:13.416 Malloc0 00:27:13.416 18:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:13.675 Delay0 00:27:13.675 18:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:13.934 18:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:13.934 NULL1 00:27:13.934 18:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
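The RPC sequence from ns_hotplug_stress.sh@27 through @36, scattered through the block above, collected in one place; every call is verbatim from the trace (the latency reading of the bdev_delay_create values is the usual microseconds interpretation, stated as an assumption):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc_py bdev_malloc_create 32 512 -b Malloc0     # 32 MiB backing bdev, 512 B blocks
  # delay bdev over Malloc0; the four 1000000 values are latencies in microseconds
  $rpc_py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc_py bdev_null_create NULL1 1000 512          # 1000-block null bdev, resized later
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1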
00:27:14.193 18:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3228214 00:27:14.194 18:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:14.194 18:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:14.194 18:04:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:14.454 18:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:14.454 18:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:14.454 18:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:14.714 true 00:27:14.714 18:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:14.714 18:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:14.976 18:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:14.976 18:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:14.976 18:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:15.235 true 00:27:15.235 18:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:15.235 18:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:15.235 18:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:15.494 18:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:15.494 18:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:15.754 true 00:27:15.754 18:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:15.754 18:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:15.754 18:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:16.013 18:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:16.013 18:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:16.013 true 00:27:16.013 18:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:16.013 18:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:16.273 18:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:16.580 18:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:16.580 18:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:16.580 true 00:27:16.580 18:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:16.580 18:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:17.555 Read completed with error (sct=0, sc=11) 00:27:17.556 18:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:17.815 18:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:17.815 18:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:17.815 true 00:27:17.815 18:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:17.815 18:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:18.073 18:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:18.333 18:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:18.333 18:04:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:18.333 true 00:27:18.333 18:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:18.333 18:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:18.592 18:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:18.592 18:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:18.592 18:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:18.852 true 00:27:18.853 18:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:18.853 18:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:19.114 18:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:19.114 18:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:19.114 18:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:19.374 true 00:27:19.374 18:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:19.374 18:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:19.633 18:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:19.633 18:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:19.633 18:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:19.892 true 00:27:19.893 18:04:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:19.893 18:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:19.893 18:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:20.153 18:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:20.153 18:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:20.153 true 00:27:20.414 18:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:20.414 18:04:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:20.414 18:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:20.673 18:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:20.673 18:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:20.673 true 00:27:20.673 18:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:20.673 18:04:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:21.611 18:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:21.871 18:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:21.871 18:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:21.871 true 00:27:21.871 18:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:21.871 18:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:22.132 18:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:22.392 18:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:22.392 18:04:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:22.392 true 00:27:22.392 18:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:22.392 18:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:22.652 18:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:22.652 18:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:22.652 18:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:22.912 true 00:27:22.912 18:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:22.912 18:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:23.171 18:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:23.171 18:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:23.171 18:04:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:23.451 true 00:27:23.451 18:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:23.451 18:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:23.451 18:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:23.710 18:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:23.710 18:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:23.968 true 00:27:23.968 18:04:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:23.968 18:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:24.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:24.904 18:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:24.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:24.904 18:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:27:24.904 18:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:25.163 true 00:27:25.163 18:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:25.163 18:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:25.421 18:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:25.421 18:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:25.421 18:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:25.678 true 00:27:25.678 18:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:25.678 18:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:25.678 18:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:25.936 18:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:27:25.936 18:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:27:26.194 true 00:27:26.195 18:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:26.195 18:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
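From ns_hotplug_stress.sh@44 through @50 the trace repeats one cycle roughly per second; reconstructed as a loop below. The shell form is assumed, but the commands, the counter, and PERF_PID all come from the trace:

  null_size=1000
  while kill -0 "$PERF_PID"; do                            # loop while spdk_nvme_perf runs
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove nsid 1
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
      null_size=$((null_size + 1))
      $rpc_py bdev_null_resize NULL1 "$null_size"          # grow NULL1; prints 'true' on success
  done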
00:27:26.195 18:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:26.452 18:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:27:26.453 18:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:27:26.453 true 00:27:26.453 18:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:26.453 18:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:26.711 18:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:26.969 18:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:27:26.969 18:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:27:26.969 true 00:27:26.969 18:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:26.969 18:04:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:28.342 18:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:28.342 18:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:27:28.342 18:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:27:28.342 true 00:27:28.342 18:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:28.342 18:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:28.601 18:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:28.601 18:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:27:28.601 18:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:27:28.859 true 00:27:28.859 18:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:28.859 18:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:28.859 18:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:29.119 18:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:27:29.119 18:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:27:29.378 true 00:27:29.378 18:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:29.378 18:04:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:29.378 18:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:29.637 18:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:27:29.637 18:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:27:29.637 true 00:27:29.637 18:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:29.637 18:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:29.896 18:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:30.154 18:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:27:30.154 18:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:27:30.154 true 00:27:30.154 18:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:30.154 18:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:30.412 18:04:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:30.671 18:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:27:30.671 18:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:27:30.671 true 00:27:30.671 18:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:30.671 18:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:30.930 18:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:30.930 18:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:27:30.930 18:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:27:31.190 true 00:27:31.190 18:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:31.190 18:04:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:32.128 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:32.128 18:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:32.128 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:32.128 18:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:27:32.128 18:04:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:27:32.387 true 00:27:32.387 18:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:32.387 18:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:32.646 18:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:32.646 18:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 
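The load keeping that loop alive is the initiator-side run started back at ns_hotplug_stress.sh@40 (PID 3228214), reproduced here with the option meanings spelled out; the reading of -Q is inferred from the "Message suppressed 999 times" lines, so treat it as an assumption. The intermittent "Read completed with error (sct=0, sc=11)" completions are presumably reads caught while nsid 1 was detached, which is exactly the condition this stress test provokes:

  # -c 0x1: one initiator core; -t 30: run for 30 s; -q 128: queue depth 128;
  # -w randread -o 512: 512 B random reads; -Q 1000: keep going on I/O errors,
  # logging only one in every 1000 (inferred from the suppression messages)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!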
00:27:32.646 18:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:27:32.906 true 00:27:32.906 18:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:32.906 18:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:33.165 18:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:33.165 18:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:27:33.165 18:04:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:27:33.434 true 00:27:33.434 18:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:33.434 18:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:34.373 18:04:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:34.373 18:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:27:34.373 18:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:27:34.631 true 00:27:34.631 18:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:34.631 18:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:34.631 18:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:34.889 18:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:27:34.889 18:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:27:34.889 true 00:27:35.147 18:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:35.147 18:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:35.147 18:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:35.405 18:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:27:35.405 18:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:27:35.405 true 00:27:35.405 18:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:35.405 18:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:35.663 18:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:35.923 18:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:27:35.923 18:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:27:35.923 true 00:27:35.923 18:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:35.923 18:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:36.181 18:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:36.181 18:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:27:36.182 18:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:27:36.441 true 00:27:36.441 18:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:36.441 18:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:37.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.378 18:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:37.636 18:04:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:27:37.636 18:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:27:37.636 true 00:27:37.636 18:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:37.636 18:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:37.895 18:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:38.154 18:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:27:38.154 18:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:27:38.154 true 00:27:38.154 18:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:38.154 18:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:38.413 18:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:38.674 18:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:27:38.674 18:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:27:38.674 true 00:27:38.674 18:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:38.674 18:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:38.934 18:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:38.934 18:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:27:38.934 18:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:27:39.192 true 00:27:39.192 18:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:39.192 18:04:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:39.450 18:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:39.450 18:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:27:39.450 18:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:27:39.710 true 00:27:39.710 18:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:39.710 18:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:39.710 18:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:39.969 18:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:27:39.969 18:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:27:40.227 true 00:27:40.227 18:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:40.227 18:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:40.227 18:04:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:40.486 18:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:27:40.486 18:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:27:40.486 true 00:27:40.486 18:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:40.486 18:04:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:41.421 18:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:41.681 18:04:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:27:41.681 18:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:27:41.941 true 00:27:41.941 18:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:41.941 18:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:41.941 18:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:42.200 18:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:27:42.200 18:04:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:27:42.200 true 00:27:42.459 18:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:42.459 18:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:42.459 18:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:42.717 18:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:27:42.717 18:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:27:42.717 true 00:27:42.717 18:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:42.717 18:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:42.976 18:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:43.236 18:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:27:43.236 18:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:27:43.236 true 00:27:43.236 18:04:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214 00:27:43.236 18:04:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:44.693 Initializing NVMe Controllers
00:27:44.693 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:44.693 Controller IO queue size 128, less than required.
00:27:44.693 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:44.693 Controller IO queue size 128, less than required.
00:27:44.693 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:44.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:44.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:44.693 Initialization complete. Launching workers.
00:27:44.693 ========================================================
00:27:44.693                                                                           Latency(us)
00:27:44.693 Device Information                                                 :       IOPS      MiB/s    Average        min        max
00:27:44.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     219.77       0.11  184514.66    2138.72 1052486.23
00:27:44.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    9876.50       4.82   12916.99    1217.44  400433.44
00:27:44.693 ========================================================
00:27:44.693 Total                                                              :   10096.27       4.93   16652.18    1217.44 1052486.23
00:27:44.693
00:27:44.693 18:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:44.952 18:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050
00:27:44.952 18:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050
00:27:44.952 true
00:27:44.952 18:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3228214
00:27:44.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3228214) - No such process
00:27:44.952 18:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3228214
00:27:44.952 18:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:45.211 18:04:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:45.211 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:27:45.211 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:27:45.211 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:27:45.211 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:27:45.211 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:27:45.470 null0
00:27:45.470 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:27:45.470 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:27:45.470 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:27:45.729 null1
00:27:45.729 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:27:45.729
18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:45.729 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:27:45.729 null2 00:27:45.729 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:45.729 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:45.729 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:27:45.988 null3 00:27:45.988 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:45.988 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:45.988 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:27:45.988 null4 00:27:45.988 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:45.988 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:45.988 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:27:46.247 null5 00:27:46.247 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:46.247 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:46.247 18:04:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:27:46.505 null6 00:27:46.505 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:46.505 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:46.505 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:27:46.505 null7 00:27:46.505 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:46.505 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:46.505 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:27:46.505 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:46.505 18:04:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:46.505 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:46.505 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:27:46.505 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:46.505 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:27:46.505 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
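For reference while reading the interleaved xtrace above: the add_remove helper being stepped through at target/ns_hotplug_stress.sh@14-18 reduces to roughly the loop below. This is a minimal sketch reconstructed from the trace, not the verbatim SPDK script; rpc_py here stands for the scripts/rpc.py path shown in the log.

    #!/usr/bin/env bash
    # Sketch of the traced add_remove helper (ns_hotplug_stress.sh@14-18),
    # reconstructed from the xtrace records above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    add_remove() {
        local nsid=$1 bdev=$2 # e.g. "add_remove 1 null0" ... "add_remove 8 null7" per the trace
        for ((i = 0; i < 10; i++)); do
            # attach the null bdev as namespace $nsid, then detach it again
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

Eight instances of this loop run concurrently against the same subsystem, which is why the @16/@17/@18 records interleave in the output.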
00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
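The spawner around those loops (target/ns_hotplug_stress.sh@58-66: nthreads=8, pids=(), the bdev_null_create loop, pids+=($!), and the eight-PID wait) is consistent with the following shape, again a hedged reconstruction from the trace rather than the script itself, reusing rpc_py and add_remove from the sketch above.

    # Sketch of the traced spawner (ns_hotplug_stress.sh@58-66).
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # null bdevs sized per the traced arguments (100 MiB, 4096-byte blocks)
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" & # nsid 1..8 against null0..null7
        pids+=($!)                       # collect worker PIDs
    done
    wait "${pids[@]}" # matches the "wait 3235445 3235447 ..." record below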
00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3235445 3235447 3235449 3235451 3235453 3235455 3235457 3235460 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:46.506 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:46.766 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:46.766 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:46.766 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:46.766 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:46.766 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:46.766 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:46.766 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:46.766 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:46.766 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:46.766 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:46.766 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:46.766 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:46.766 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:46.766 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 
null5 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:47.025 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:47.285 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:47.285 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.285 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.285 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:47.285 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.285 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.285 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:47.285 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.285 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.285 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:47.285 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.285 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.285 18:04:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:47.285 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.285 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.285 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:47.285 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.285 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.285 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:47.285 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.285 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.285 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.285 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:47.285 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.285 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:47.285 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:47.285 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.545 
18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.545 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:47.805 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.805 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.805 18:04:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:47.805 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:47.805 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:47.805 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.805 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.805 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:47.805 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:47.805 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:47.805 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.805 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:47.805 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:47.805 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.805 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.805 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:47.805 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.805 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.806 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:47.806 18:04:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:47.806 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.806 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.806 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.066 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:48.326 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:48.326 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.326 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.326 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:48.326 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.326 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.326 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.326 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:48.326 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.326 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:48.326 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.326 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.326 18:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:48.326 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.326 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.326 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:48.326 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:48.326 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:48.326 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.326 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.326 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:48.326 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:48.326 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:48.326 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:48.326 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.585 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.585 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.585 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:48.585 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:48.585 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.585 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.585 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:48.585 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.585 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.586 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:48.586 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:48.586 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.586 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.586 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:48.586 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.586 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.586 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:48.586 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.586 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.586 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:48.586 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:48.586 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:48.586 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:48.586 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.586 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.586 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:48.846 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:49.106 18:04:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:49.106 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:49.365 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.365 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.365 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:49.365 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.365 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.365 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:49.365 18:04:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.365 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.623 
18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.623 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.882 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.882 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.882 
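The interleaved calls above are the namespace hot-plug loop of test/nvmf/target/ns_hotplug_stress.sh: the @16 markers are the loop counter ((( ++i )) and (( i < 10 ))), @17 is nvmf_subsystem_add_ns and @18 is nvmf_subsystem_remove_ns against nqn.2016-06.io.spdk:cnode1. A minimal bash sketch of that pattern, reconstructed from the trace alone; the add_remove name and the one-background-job-per-bdev structure are assumptions made here to explain why iterations for null0 through null7 interleave:

  nqn=nqn.2016-06.io.spdk:cnode1
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  add_remove() {    # hypothetical name; loop bounds taken from the @16 trace
      local ns=$1 bdev=$2 i
      for ((i = 0; i < 10; ++i)); do
          "$rpc" nvmf_subsystem_add_ns -n "$ns" "$nqn" "$bdev"   # @17
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$ns"           # @18
      done
  }
  for n in {1..8}; do
      add_remove "$n" "null$((n - 1))" &   # assumed concurrency; one worker per null bdev
  done
  wait

Each worker keeps attaching and detaching its namespace while host I/O runs, so the subsystem's namespace list churns for the whole ten-iteration window; that churn is the stress being tested.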
18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.882 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:49.882 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.882 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.882 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.882 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.882 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:49.882 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.882 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.882 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.882 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.882 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.882 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.882 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # 
set +e
00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:50.141 rmmod nvme_tcp
00:27:50.141 rmmod nvme_fabrics
00:27:50.141 rmmod nvme_keyring
00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3227837 ']'
00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3227837
00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3227837 ']'
00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3227837
00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3227837
00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3227837'
00:27:50.141 killing process with pid 3227837
00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3227837
00:27:50.141 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3227837
00:27:50.401 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:50.401 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:50.401 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:50.401 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:27:50.401 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:27:50.401 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:50.401 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:27:50.401 18:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:50.401 18:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:50.401 18:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:50.401 18:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:50.401 18:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:52.306 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:52.306
00:27:52.306 real 0m46.244s
00:27:52.306 user 2m56.888s
00:27:52.306 sys 0m17.692s
00:27:52.306 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:52.306 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:27:52.306 ************************************
00:27:52.306 END TEST nvmf_ns_hotplug_stress
00:27:52.306 ************************************
00:27:52.306 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:27:52.306 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:27:52.306 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:52.306 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:27:52.306 ************************************
00:27:52.306 START TEST nvmf_delete_subsystem
00:27:52.306 ************************************
00:27:52.306 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:27:52.568 * Looking for test storage...
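The sequence above is the standard teardown, nvmftestfini from test/nvmf/common.sh: sync, retry-unload of the kernel nvme-tcp and nvme-fabrics modules (the rmmod lines are modprobe -v output), killprocess on the target pid 3227837 (running as reactor_1), an iptables round-trip that drops the SPDK_NVMF rules, removal of the spdk network namespace, and a flush of the initiator-side address. A condensed sketch of that path, assembled from the traced line numbers; the nvmfpid variable name and the retry details (&& break, sleep) are assumptions:

  nvmfcleanup() {
      sync
      set +e
      for i in {1..20}; do                  # @125: retry until the module unloads
          modprobe -v -r nvme-tcp && break  # @126: prints the rmmod lines seen above
          sleep 1                           # assumed backoff between attempts
      done
      modprobe -v -r nvme-fabrics           # @127
      set -e
  }
  nvmftestfini() {
      nvmfcleanup
      [ -n "$nvmfpid" ] && killprocess "$nvmfpid"            # 3227837 in this run
      iptables-save | grep -v SPDK_NVMF | iptables-restore   # @791: strip test rules
      remove_spdk_ns                                         # @302: drop the spdk netns
      ip -4 addr flush cvl_0_1                               # @303: clear the initiator NIC
  }

killprocess itself is visible in the trace: it rejects an empty pid, checks the process is alive with kill -0, reads the command name with ps --no-headers -o comm= (reactor_1 here), then kills and waits.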
00:27:52.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:52.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.568 --rc genhtml_branch_coverage=1 00:27:52.568 --rc genhtml_function_coverage=1 00:27:52.568 --rc genhtml_legend=1 00:27:52.568 --rc geninfo_all_blocks=1 00:27:52.568 --rc geninfo_unexecuted_blocks=1 00:27:52.568 00:27:52.568 ' 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:52.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.568 --rc genhtml_branch_coverage=1 00:27:52.568 --rc genhtml_function_coverage=1 00:27:52.568 --rc genhtml_legend=1 00:27:52.568 --rc geninfo_all_blocks=1 00:27:52.568 --rc geninfo_unexecuted_blocks=1 00:27:52.568 00:27:52.568 ' 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:52.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.568 --rc genhtml_branch_coverage=1 00:27:52.568 --rc genhtml_function_coverage=1 00:27:52.568 --rc genhtml_legend=1 00:27:52.568 --rc geninfo_all_blocks=1 00:27:52.568 --rc geninfo_unexecuted_blocks=1 00:27:52.568 00:27:52.568 ' 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:52.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:52.568 --rc genhtml_branch_coverage=1 00:27:52.568 --rc genhtml_function_coverage=1 00:27:52.568 --rc 
genhtml_legend=1 00:27:52.568 --rc geninfo_all_blocks=1 00:27:52.568 --rc geninfo_unexecuted_blocks=1 00:27:52.568 00:27:52.568 ' 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:27:52.568 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.569 18:04:40 
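Earlier in this block, before delete_subsystem.sh gets going, autotest_common.sh probes the installed lcov (the scripts/common.sh@333 to @368 trace): lt 1.15 2 splits both version strings on dots and dashes, treats missing components as zero, and compares component by component to decide which --rc coverage flags lcov will accept. A condensed sketch of that comparison; the real helper also validates each component through its decimal guard, elided here:

  cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
      local -a ver1 ver2
      local op=$2 v ver1_l ver2_l d1 d2
      IFS='.-' read -ra ver1 <<< "$1"     # @336: "1.15" -> (1 15), ver1_l=2
      IFS='.-' read -ra ver2 <<< "$3"     # @337: "2"    -> (2),    ver2_l=1
      ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
          d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing components compare as 0
          ((d1 > d2)) && { [[ $op == '>' || $op == '>=' ]]; return; }
          ((d1 < d2)) && { [[ $op == '<' || $op == '<=' ]]; return; }
      done
      [[ $op == *'='* ]]                  # equal versions satisfy ==, <=, >=
  }
  lt() { cmp_versions "$1" '<' "$2"; }    # lt 1.15 2 returns 0 here, as in the trace

Since 1 < 2 on the first component the check succeeds, and the branch-coverage LCOV_OPTS seen above are exported.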
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:27:52.569 18:04:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:57.840 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:57.840 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:27:57.840 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:57.840 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:57.840 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:57.840 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:57.840 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:57.840 18:04:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:27:57.840 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:57.840 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:27:57.840 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:27:57.840 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:57.841 18:04:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:57.841 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:57.841 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.841 18:04:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:57.841 Found net devices under 0000:31:00.0: cvl_0_0 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:57.841 Found net devices under 0000:31:00.1: cvl_0_1 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:57.841 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:57.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:57.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.557 ms 00:27:57.842 00:27:57.842 --- 10.0.0.2 ping statistics --- 00:27:57.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.842 rtt min/avg/max/mdev = 0.557/0.557/0.557/0.000 ms 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:57.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:57.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:27:57.842 00:27:57.842 --- 10.0.0.1 ping statistics --- 00:27:57.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:57.842 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3240815 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3240815 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3240815 ']' 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
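The nvmf_tcp_init sequence traced above wires the two E810 ports into a back-to-back TCP test topology: the target port (cvl_0_0) is moved into its own network namespace, each side gets an address on 10.0.0.0/24, an iptables rule opens the NVMe/TCP port, and a ping in each direction confirms the path before anything else starts. A condensed sketch of those steps, using the same interface and namespace names the harness derived here (not a verbatim excerpt of nvmf/common.sh):

  # target NIC goes into a private namespace; initiator NIC stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP (port 4420) in from the initiator interface, tagged so cleanup can find it
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # verify reachability in both directions
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1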
00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:57.842 18:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:27:57.842 [2024-12-06 18:04:45.439955] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:57.842 [2024-12-06 18:04:45.440965] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:27:57.842 [2024-12-06 18:04:45.441003] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:57.842 [2024-12-06 18:04:45.528171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:57.842 [2024-12-06 18:04:45.574206] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:57.842 [2024-12-06 18:04:45.574255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:57.842 [2024-12-06 18:04:45.574265] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:57.842 [2024-12-06 18:04:45.574273] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:57.842 [2024-12-06 18:04:45.574280] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:57.842 [2024-12-06 18:04:45.575841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.842 [2024-12-06 18:04:45.575847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.842 [2024-12-06 18:04:45.646150] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:57.842 [2024-12-06 18:04:45.646787] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:57.842 [2024-12-06 18:04:45.646927] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
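The target itself is launched inside that namespace, pinned to cores 0-1 and forced into interrupt mode, and the harness blocks until the RPC socket at /var/tmp/spdk.sock answers — the EAL/reactor notices above are its boot output. A minimal launch-and-wait sketch, assuming rpc.py with the spdk_get_version RPC as the liveness probe (the real waitforlisten helper in autotest_common.sh is more elaborate):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # poll until the target answers on its UNIX-domain RPC socket
  until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version &>/dev/null; do
      kill -0 "$nvmfpid" || exit 1   # give up if the target already died
      sleep 0.5
  done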
00:27:58.408 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:58.408 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:27:58.408 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:58.408 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:58.408 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:58.667 [2024-12-06 18:04:46.248775] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:58.667 [2024-12-06 18:04:46.268794] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:58.667 NULL1 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.667 18:04:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:58.667 Delay0 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3240852 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:27:58.667 18:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:58.667 [2024-12-06 18:04:46.337597] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
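Everything the delete test needs has now been configured over RPC: a TCP transport, subsystem cnode1 with a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev so that I/O submitted by perf stays in flight long enough for the delete to race it. With rpc_cmd standing in for scripts/rpc.py, a sketch of the traced calls (parameter readings in the comments are interpretations, not from the trace):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512           # 1000 MB backing bdev, 512-byte blocks
  # ~1 s (1000000 us) average and p99 latency on reads and writes keeps requests in flight
  rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # drive 128-deep queued I/O at the subsystem while the test deletes it out from under perf
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!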
00:28:00.573 18:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:00.573 18:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.573 18:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:00.833 Read completed with error (sct=0, sc=8) 00:28:00.833 starting I/O failed: -6 00:28:00.833 Read completed with error (sct=0, sc=8) 00:28:00.833 Read completed with error (sct=0, sc=8) 00:28:00.833 Write completed with error (sct=0, sc=8) 00:28:00.833 Read completed with error (sct=0, sc=8) 00:28:00.833 starting I/O failed: -6 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 [2024-12-06 18:04:48.632042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94d2c0 is same with the state(6) to be set 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read 
completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 [2024-12-06 18:04:48.632564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94cf00 is same with the state(6) to be set 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error 
(sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 starting I/O failed: -6 00:28:00.834 Write completed with error (sct=0, sc=8) 00:28:00.834 Read completed with error (sct=0, sc=8) 00:28:00.835 [2024-12-06 18:04:48.635586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65f800d020 is same with the state(6) to be set 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Write completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Write completed with error (sct=0, sc=8) 00:28:00.835 Write completed with error (sct=0, sc=8) 00:28:00.835 Write completed with error (sct=0, sc=8) 00:28:00.835 Write completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Write completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Write completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed 
with error (sct=0, sc=8) 00:28:00.835 Write completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Write completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Write completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Write completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Write completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Write completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:00.835 Read completed with error (sct=0, sc=8) 00:28:02.212 [2024-12-06 18:04:49.600208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94e5f0 is same with the state(6) to be set 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 [2024-12-06 18:04:49.635250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94d0e0 is same with the state(6) to be set 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error 
(sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 [2024-12-06 18:04:49.635629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94d4a0 is same with the state(6) to be set 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Read completed with error (sct=0, sc=8) 00:28:02.212 Write completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 [2024-12-06 18:04:49.637676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65f8000c40 is same with the state(6) to be set 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Write completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Write completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Write completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Write completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Write completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 
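The failed completions scrolling past here are the point of the test: nvmf_delete_subsystem tore down cnode1 while spdk_nvme_perf still had deep queues outstanding against the delayed namespace, so every queued request is failed back to the initiator — sct=0, sc=8, which, read as raw NVMe status fields, is generic status 0x08, "Command Aborted due to SQ Deletion" — alongside the "starting I/O failed: -6" submission errors for requests that could no longer be queued at all. The script then only has to confirm that perf notices and exits, the kill -0 polling visible below; a reconstruction from the traced lines 34-38 of delete_subsystem.sh:

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # perf still running?
      (( delay++ > 30 )) && exit 1            # fail the test if perf never exits
      sleep 0.5
  done
  # 'kill: (pid) - No such process' in the trace below is the expected outcome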
00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Write completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 Read completed with error (sct=0, sc=8) 00:28:02.213 [2024-12-06 18:04:49.638039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f65f800d350 is same with the state(6) to be set 00:28:02.213 Initializing NVMe Controllers 00:28:02.213 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:02.213 Controller IO queue size 128, less than required. 00:28:02.213 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:02.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:02.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:02.213 Initialization complete. Launching workers. 00:28:02.213 ======================================================== 00:28:02.213 Latency(us) 00:28:02.213 Device Information : IOPS MiB/s Average min max 00:28:02.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.92 0.08 908819.58 542.86 1005939.41 00:28:02.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 167.90 0.08 944216.39 229.48 2001222.63 00:28:02.213 ======================================================== 00:28:02.213 Total : 331.82 0.16 926730.58 229.48 2001222.63 00:28:02.213 00:28:02.213 [2024-12-06 18:04:49.638498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94e5f0 (9): Bad file descriptor 00:28:02.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:28:02.213 18:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.213 18:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:28:02.213 18:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3240852 00:28:02.213 18:04:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3240852 00:28:02.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3240852) - No such process 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3240852 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3240852 00:28:02.472 18:04:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3240852 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:02.472 [2024-12-06 18:04:50.160966] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3241843 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@57 -- # kill -0 3241843 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:02.472 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:02.472 [2024-12-06 18:04:50.213153] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:03.040 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:03.040 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3241843 00:28:03.040 18:04:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:03.608 18:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:03.608 18:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3241843 00:28:03.608 18:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:03.905 18:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:03.905 18:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3241843 00:28:03.905 18:04:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:04.567 18:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:04.567 18:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3241843 00:28:04.567 18:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:05.134 18:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:05.134 18:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3241843 00:28:05.134 18:04:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:05.393 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:05.393 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3241843 00:28:05.393 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:05.653 Initializing NVMe Controllers 00:28:05.653 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:05.653 Controller IO queue size 128, less than required. 
00:28:05.653 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:05.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:05.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:05.653 Initialization complete. Launching workers. 00:28:05.653 ======================================================== 00:28:05.653 Latency(us) 00:28:05.653 Device Information : IOPS MiB/s Average min max 00:28:05.653 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003678.75 1000236.42 1009090.80 00:28:05.653 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003277.90 1000186.84 1043046.96 00:28:05.653 ======================================================== 00:28:05.653 Total : 256.00 0.12 1003478.32 1000186.84 1043046.96 00:28:05.653 00:28:05.911 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:05.911 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3241843 00:28:05.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3241843) - No such process 00:28:05.911 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3241843 00:28:05.911 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:05.911 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:28:05.911 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:05.911 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:28:05.911 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:05.911 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:05.911 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:05.911 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:05.911 rmmod nvme_tcp 00:28:05.911 rmmod nvme_fabrics 00:28:06.170 rmmod nvme_keyring 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3240815 ']' 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3240815 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3240815 ']' 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 
3240815 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3240815 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3240815' 00:28:06.170 killing process with pid 3240815 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3240815 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3240815 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:06.170 18:04:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.699 18:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:08.699 00:28:08.699 real 0m15.876s 00:28:08.699 user 0m25.964s 00:28:08.699 sys 0m5.598s 00:28:08.699 18:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:08.699 18:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:08.699 ************************************ 00:28:08.699 END TEST nvmf_delete_subsystem 00:28:08.699 ************************************ 00:28:08.699 18:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode 
-- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:08.699 18:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:08.699 18:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:08.699 18:04:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:08.699 ************************************ 00:28:08.699 START TEST nvmf_host_management 00:28:08.699 ************************************ 00:28:08.699 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:08.699 * Looking for test storage... 00:28:08.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:08.699 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:08.699 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:28:08.699 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:08.699 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:08.699 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:08.699 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:08.699 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:08.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.700 --rc genhtml_branch_coverage=1 00:28:08.700 --rc genhtml_function_coverage=1 00:28:08.700 --rc genhtml_legend=1 00:28:08.700 --rc geninfo_all_blocks=1 00:28:08.700 --rc geninfo_unexecuted_blocks=1 00:28:08.700 00:28:08.700 ' 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:08.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.700 --rc genhtml_branch_coverage=1 00:28:08.700 --rc genhtml_function_coverage=1 00:28:08.700 --rc genhtml_legend=1 00:28:08.700 --rc geninfo_all_blocks=1 00:28:08.700 --rc geninfo_unexecuted_blocks=1 00:28:08.700 00:28:08.700 ' 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:08.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.700 --rc genhtml_branch_coverage=1 00:28:08.700 --rc genhtml_function_coverage=1 00:28:08.700 --rc genhtml_legend=1 00:28:08.700 --rc geninfo_all_blocks=1 00:28:08.700 --rc geninfo_unexecuted_blocks=1 00:28:08.700 00:28:08.700 ' 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:08.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:08.700 --rc genhtml_branch_coverage=1 00:28:08.700 --rc genhtml_function_coverage=1 00:28:08.700 --rc genhtml_legend=1 
00:28:08.700 --rc geninfo_all_blocks=1 00:28:08.700 --rc geninfo_unexecuted_blocks=1 00:28:08.700 00:28:08.700 ' 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:08.700 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.701 18:04:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:08.701 18:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:13.975 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:13.975 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:13.975 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:13.975 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:13.975 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:13.975 18:05:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:13.975 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:13.975 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:13.975 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:13.975 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:13.975 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:13.975 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:13.975 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:13.975 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:13.975 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:13.975 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:13.976 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:13.976 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
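The device-discovery walk above (and continuing just below) matches each PCI function against per-family ID whitelists (e810: 0x1592/0x159b, x722: 0x37d2, plus the mlx list) and then resolves every match to its kernel net device through sysfs. A standalone sketch of that resolution step, assuming an E810 part; the vendor/device IDs and sysfs paths are taken from the trace, the rest is illustrative:

    #!/usr/bin/env bash
    # List E810 functions (vendor 0x8086, device 0x159b) and the net
    # devices bound to them, via /sys/bus/pci/devices/<bdf>/net/ --
    # the same lookup the harness performs above.
    intel=0x8086
    for bdf in /sys/bus/pci/devices/*; do
        [[ $(<"$bdf/vendor") == "$intel" && $(<"$bdf/device") == 0x159b ]] || continue
        for netdev in "$bdf"/net/*; do
            [[ -e $netdev ]] || continue   # skip if no netdev is bound
            echo "Found ${bdf##*/} ($(<"$bdf/vendor") - $(<"$bdf/device")): ${netdev##*/}"
        done
    done

On this machine the loop would report the two ports 0000:31:00.0/cvl_0_0 and 0000:31:00.1/cvl_0_1, as the echoes in the log show.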
00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:13.976 Found net devices under 0000:31:00.0: cvl_0_0 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:13.976 Found net devices under 0000:31:00.1: cvl_0_1 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:13.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:13.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:28:13.976 00:28:13.976 --- 10.0.0.2 ping statistics --- 00:28:13.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.976 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:28:13.976 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:13.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:13.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:28:13.977 00:28:13.977 --- 10.0.0.1 ping statistics --- 00:28:13.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.977 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3246863 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3246863 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3246863 ']' 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
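The nvmf_tcp_init sequence traced above (common.sh@250-291) restates compactly: the target port is moved into its own network namespace, both ports get addresses on 10.0.0.0/24, and a comment-tagged iptables rule opens the NVMe/TCP port so that teardown can later strip exactly the rules the harness added (the iptables-save | grep -v SPDK_NVMF | iptables-restore round-trip seen at the top of this section). A condensed replay, with interface names, addresses, and flags copied from the log:

    #!/usr/bin/env bash
    # Target port (cvl_0_0) lives in namespace cvl_0_0_ns_spdk; the
    # initiator port (cvl_0_1) stays in the root namespace.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Open TCP/4420 inbound, tagged so cleanup can find the rule later:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                          # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator

The two pings at the end are the same reachability checks whose output appears in the log just above.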
00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:13.977 18:05:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:13.977 [2024-12-06 18:05:01.474488] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:13.977 [2024-12-06 18:05:01.475504] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:28:13.977 [2024-12-06 18:05:01.475542] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.977 [2024-12-06 18:05:01.546689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:13.977 [2024-12-06 18:05:01.576897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:13.977 [2024-12-06 18:05:01.576923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.977 [2024-12-06 18:05:01.576932] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:13.977 [2024-12-06 18:05:01.576938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:13.977 [2024-12-06 18:05:01.576943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:13.977 [2024-12-06 18:05:01.578160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:13.977 [2024-12-06 18:05:01.578452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:13.977 [2024-12-06 18:05:01.578613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.977 [2024-12-06 18:05:01.578614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:13.977 [2024-12-06 18:05:01.630583] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:13.977 [2024-12-06 18:05:01.631483] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:13.977 [2024-12-06 18:05:01.631602] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:13.977 [2024-12-06 18:05:01.631691] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:13.977 [2024-12-06 18:05:01.631720] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
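nvmfappstart, traced above, boils down to launching nvmf_tgt inside the target namespace with --interrupt-mode (which is why the reactor and spdk_thread notices all report intr mode) and then blocking until the RPC socket answers. A rough equivalent of that launch-and-wait handshake; the binary path, core mask, and socket path are taken from the log, and the polling loop is a simplified stand-in for the harness's waitforlisten helper:

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from the log
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    # Poll until initialization completes; framework_wait_init returns
    # once the app is ready to accept further RPCs.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt ($nvmfpid) listening on /var/tmp/spdk.sock"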
00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:14.545 [2024-12-06 18:05:02.279348] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:14.545 Malloc0 00:28:14.545 [2024-12-06 18:05:02.347123] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:14.545 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:14.805 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3247230 00:28:14.805 18:05:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3247230 /var/tmp/bdevperf.sock 00:28:14.805 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3247230 ']' 00:28:14.805 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:14.805 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:14.805 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:14.805 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:14.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:14.805 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:14.805 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:14.805 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:14.805 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:14.805 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:14.805 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:14.805 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:14.805 { 00:28:14.805 "params": { 00:28:14.805 "name": "Nvme$subsystem", 00:28:14.805 "trtype": "$TEST_TRANSPORT", 00:28:14.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.805 "adrfam": "ipv4", 00:28:14.805 "trsvcid": "$NVMF_PORT", 00:28:14.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.805 "hdgst": ${hdgst:-false}, 00:28:14.805 "ddgst": ${ddgst:-false} 00:28:14.805 }, 00:28:14.805 "method": "bdev_nvme_attach_controller" 00:28:14.805 } 00:28:14.805 EOF 00:28:14.805 )") 00:28:14.805 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:14.805 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
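The gen_nvmf_target_json block above shows the harness's templating trick: a here-doc with live parameter expansion renders one bdev_nvme_attach_controller entry per subsystem into an array, and printf with IFS=, joins the entries. A minimal self-contained reproduction, assuming the field values seen in the log; the outer document that the trace's jq . step renders for bdevperf's --json (delivered on /dev/fd/63) is not shown in the trace and is omitted here too:

    #!/usr/bin/env bash
    # Render one attach-controller entry per subsystem id and join them
    # with commas -- the same fragment the printf in the log emits.
    gen_attach_entries() {
        local subsystem
        local -a config=()
        for subsystem in "${@:-1}"; do
            config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
EOF
            )")
        done
        local IFS=,
        printf '%s\n' "${config[*]}"
    }

    gen_attach_entries 0   # reproduces the Nvme0 entry printed below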
00:28:14.805 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:14.805 18:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:14.805 "params": { 00:28:14.805 "name": "Nvme0", 00:28:14.805 "trtype": "tcp", 00:28:14.805 "traddr": "10.0.0.2", 00:28:14.805 "adrfam": "ipv4", 00:28:14.805 "trsvcid": "4420", 00:28:14.805 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:14.805 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:14.805 "hdgst": false, 00:28:14.805 "ddgst": false 00:28:14.805 }, 00:28:14.805 "method": "bdev_nvme_attach_controller" 00:28:14.805 }' 00:28:14.805 [2024-12-06 18:05:02.419255] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:28:14.805 [2024-12-06 18:05:02.419307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3247230 ] 00:28:14.805 [2024-12-06 18:05:02.498031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.805 [2024-12-06 18:05:02.534498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.064 Running I/O for 10 seconds... 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=593 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 593 -ge 100 ']' 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.633 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:15.633 [2024-12-06 18:05:03.254962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e0e0 is same with the state(6) to be set 00:28:15.633 [2024-12-06 18:05:03.255008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195e0e0 is same with the state(6) to be set 00:28:15.633 [2024-12-06 18:05:03.255582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.633 [2024-12-06 18:05:03.255621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.633 [2024-12-06 18:05:03.255632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.633 [2024-12-06 18:05:03.255640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.633 [2024-12-06 18:05:03.255649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.633 [2024-12-06 18:05:03.255657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.633 [2024-12-06 18:05:03.255665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:15.633 [2024-12-06 18:05:03.255673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.633 [2024-12-06 18:05:03.255681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x21f7b10 is same with the state(6) to be set 00:28:15.633 [2024-12-06 18:05:03.255756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.633 [2024-12-06 18:05:03.255766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.633 [2024-12-06 18:05:03.255780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.633 [2024-12-06 18:05:03.255788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.633 [2024-12-06 18:05:03.255798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.634 [2024-12-06 18:05:03.255806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.634 [2024-12-06 18:05:03.255815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.634 [2024-12-06 18:05:03.255823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.634 [2024-12-06 18:05:03.255832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.634 [2024-12-06 18:05:03.255840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.634 [2024-12-06 18:05:03.255850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.634 [2024-12-06 18:05:03.255858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.634 [2024-12-06 18:05:03.255868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.634 [2024-12-06 18:05:03.255876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.634 [2024-12-06 18:05:03.255885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.634 [2024-12-06 18:05:03.255893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.634 [2024-12-06 18:05:03.255908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.634 [2024-12-06 18:05:03.255915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:15.634 [2024-12-06 18:05:03.255925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.634 [2024-12-06 18:05:03.255932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:15.634 [... dozens of further nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs omitted: every outstanding WRITE (cids 0-13 and 54-63, lba 88832-91776) and READ (cids 14-43, lba 83712-87424) on sqid:1, len:128, each completed ABORTED - SQ DELETION (00/08) ...]
00:28:15.635 [2024-12-06 18:05:03.258148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:15.635 task offset: 87552 on job bdev=Nvme0n1 fails
00:28:15.635
00:28:15.635 Latency(us)
00:28:15.635 [2024-12-06T17:05:03.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:15.635 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:15.635 Job: Nvme0n1 ended in about 0.42 seconds with error
00:28:15.635 Verification LBA range: start 0x0 length 0x400
00:28:15.635 Nvme0n1 : 0.42 1566.85 97.93 153.33 0.00 36050.37 1706.67 35389.44
00:28:15.635 [2024-12-06T17:05:03.462Z] ===================================================================================================================
00:28:15.635 [2024-12-06T17:05:03.462Z] Total : 1566.85 97.93 153.33 0.00 36050.37 1706.67 35389.44
00:28:15.635 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:15.635 [2024-12-06 18:05:03.260156] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:15.635 [2024-12-06 18:05:03.260183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f7b10 (9): Bad file descriptor
00:28:15.635 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:28:15.635 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:15.635 18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:28:15.635 [2024-12-06 18:05:03.261450] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:28:15.635 [2024-12-06 18:05:03.261511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:28:15.635 [2024-12-06 18:05:03.261535]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:15.635 [2024-12-06 18:05:03.261551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
[2024-12-06 18:05:03.261559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
[2024-12-06 18:05:03.261567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-12-06 18:05:03.261573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f7b10
[2024-12-06 18:05:03.261592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f7b10 (9): Bad file descriptor
[2024-12-06 18:05:03.261605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
[2024-12-06 18:05:03.261613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
[2024-12-06 18:05:03.261622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
[2024-12-06 18:05:03.261631] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
18:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:28:16.574 18:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3247230
00:28:16.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3247230) - No such process
00:28:16.574 18:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:28:16.574 18:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:28:16.574 18:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:28:16.574 18:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:28:16.574 18:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:28:16.574 18:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:28:16.574 18:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:16.574 18:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:16.574 {
00:28:16.574 "params": {
00:28:16.574 "name": "Nvme$subsystem",
00:28:16.574 "trtype": "$TEST_TRANSPORT",
00:28:16.574 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:16.574 "adrfam": "ipv4",
00:28:16.574 "trsvcid": "$NVMF_PORT",
00:28:16.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:16.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:16.574 "hdgst": ${hdgst:-false},
00:28:16.574 "ddgst": ${ddgst:-false}
00:28:16.574 },
00:28:16.574 "method": "bdev_nvme_attach_controller"
00:28:16.574 }
00:28:16.574 EOF
00:28:16.574 )")
00:28:16.574 18:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:28:16.574 18:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:28:16.574 18:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:28:16.574 18:05:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:28:16.574 "params": {
00:28:16.574 "name": "Nvme0",
00:28:16.574 "trtype": "tcp",
00:28:16.574 "traddr": "10.0.0.2",
00:28:16.574 "adrfam": "ipv4",
00:28:16.574 "trsvcid": "4420",
00:28:16.574 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:28:16.574 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:28:16.574 "hdgst": false,
00:28:16.574 "ddgst": false
00:28:16.574 },
00:28:16.574 "method": "bdev_nvme_attach_controller"
00:28:16.574 }'
00:28:16.574 [2024-12-06 18:05:04.308690] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization...
00:28:16.574 [2024-12-06 18:05:04.308748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3247583 ]
00:28:16.574 [2024-12-06 18:05:04.387871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:16.834 [2024-12-06 18:05:04.423132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:16.834 Running I/O for 1 seconds...
00:28:18.211 1824.00 IOPS, 114.00 MiB/s
00:28:18.211 Latency(us)
00:28:18.211 [2024-12-06T17:05:06.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:18.211 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:18.211 Verification LBA range: start 0x0 length 0x400
00:28:18.211 Nvme0n1 : 1.01 1873.63 117.10 0.00 0.00 33465.09 1788.59 35170.99
00:28:18.211 [2024-12-06T17:05:06.038Z] ===================================================================================================================
00:28:18.211 [2024-12-06T17:05:06.038Z] Total : 1873.63 117.10 0.00 0.00 33465.09 1788.59 35170.99
00:28:18.211 18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:18.211 rmmod nvme_tcp
00:28:18.211 rmmod nvme_fabrics
00:28:18.211 rmmod nvme_keyring
00:28:18.211 18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3246863 ']'
18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3246863
18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3246863 ']'
18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3246863
18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:18.211 18:05:05
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3246863 00:28:18.211 18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:18.211 18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:18.211 18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3246863' 00:28:18.211 killing process with pid 3246863 00:28:18.211 18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3246863 00:28:18.211 18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3246863 00:28:18.211 [2024-12-06 18:05:05.937591] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:18.211 18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:18.211 18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:18.211 18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:18.211 18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:18.211 18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:28:18.211 18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:28:18.211 18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:18.211 18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:18.211 18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:18.211 18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.211 18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:18.211 18:05:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.744 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:20.744 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:20.744 00:28:20.744 real 0m11.995s 00:28:20.744 user 0m17.387s 00:28:20.744 sys 0m5.577s 00:28:20.744 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:20.744 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:20.744 ************************************ 00:28:20.744 END TEST nvmf_host_management 00:28:20.744 ************************************ 00:28:20.744 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test 
nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:20.744 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:20.744 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:20.744 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:20.744 ************************************ 00:28:20.744 START TEST nvmf_lvol 00:28:20.744 ************************************ 00:28:20.744 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:20.744 * Looking for test storage... 00:28:20.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:20.744 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:20.744 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.745 --rc genhtml_branch_coverage=1 00:28:20.745 --rc genhtml_function_coverage=1 00:28:20.745 --rc genhtml_legend=1 00:28:20.745 --rc geninfo_all_blocks=1 00:28:20.745 --rc geninfo_unexecuted_blocks=1 00:28:20.745 00:28:20.745 ' 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.745 --rc genhtml_branch_coverage=1 00:28:20.745 --rc genhtml_function_coverage=1 00:28:20.745 --rc genhtml_legend=1 00:28:20.745 --rc geninfo_all_blocks=1 00:28:20.745 --rc geninfo_unexecuted_blocks=1 00:28:20.745 00:28:20.745 ' 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.745 --rc genhtml_branch_coverage=1 00:28:20.745 --rc genhtml_function_coverage=1 00:28:20.745 --rc genhtml_legend=1 00:28:20.745 --rc geninfo_all_blocks=1 00:28:20.745 --rc geninfo_unexecuted_blocks=1 00:28:20.745 00:28:20.745 ' 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:20.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:20.745 --rc genhtml_branch_coverage=1 00:28:20.745 --rc genhtml_function_coverage=1 00:28:20.745 --rc genhtml_legend=1 00:28:20.745 --rc geninfo_all_blocks=1 00:28:20.745 --rc geninfo_unexecuted_blocks=1 00:28:20.745 00:28:20.745 ' 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.745 18:05:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:20.745 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:20.746 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:20.746 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:20.746 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:20.746 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:20.746 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:20.746 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.746 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:20.746 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:20.746 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:20.746 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.746 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.746 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.746 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:20.746 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:20.746 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:20.746 18:05:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:26.018 18:05:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:26.018 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:26.018 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.018 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:26.019 Found net devices under 0000:31:00.0: cvl_0_0 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:26.019 Found net devices under 0000:31:00.1: cvl_0_1 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:26.019 
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:26.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:26.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms
00:28:26.019
00:28:26.019 --- 10.0.0.2 ping statistics ---
00:28:26.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:26.019 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms
00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:26.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:26.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms
00:28:26.019
00:28:26.019 --- 10.0.0.1 ping statistics ---
00:28:26.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:26.019 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms
00:28:26.019 18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3252256
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3252256
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3252256 ']'
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:26.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
18:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7
00:28:26.019 [2024-12-06 18:05:13.645526] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
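[Editor's note] Condensed from the ip/iptables trace above: the target side of the test lives in a private network namespace holding one port of the e810 pair, the initiator keeps the other, and a firewall rule opens the NVMe/TCP port. A sketch of the same plumbing (interface and namespace names taken from the log; run as root):

# Move the target port into its own namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP (port 4420) in from the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity-check reachability in both directions.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1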
00:28:26.019 [2024-12-06 18:05:13.646666] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:28:26.019 [2024-12-06 18:05:13.646719] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.019 [2024-12-06 18:05:13.742993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:26.019 [2024-12-06 18:05:13.797312] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.019 [2024-12-06 18:05:13.797362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:26.019 [2024-12-06 18:05:13.797371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.019 [2024-12-06 18:05:13.797379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.019 [2024-12-06 18:05:13.797385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:26.019 [2024-12-06 18:05:13.799284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.019 [2024-12-06 18:05:13.799516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:26.019 [2024-12-06 18:05:13.799519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.276 [2024-12-06 18:05:13.871657] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:26.276 [2024-12-06 18:05:13.872714] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:26.276 [2024-12-06 18:05:13.873095] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:26.276 [2024-12-06 18:05:13.873136] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
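The target above is launched inside the cvl_0_0_ns_spdk namespace with --interrupt-mode and core mask 0x7, so the three reactors sleep on epoll between events instead of busy-polling; the "to intr mode from intr mode" notices confirm each poll-group thread ends up event-driven. A minimal sketch of the launch-and-wait pattern, with paths shortened and a polling loop standing in for the harness's waitforlisten helper, assuming the default /var/tmp/spdk.sock RPC socket:

    # Start nvmf_tgt in the test namespace: -m 0x7 pins reactors to cores 0-2,
    # --interrupt-mode makes them event-driven rather than busy-polling.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
    nvmfpid=$!
    # Poll the RPC socket until the app answers (stand-in for waitforlisten).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done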
00:28:26.842 18:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:26.842 18:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:28:26.842 18:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:26.842 18:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:26.842 18:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:26.842 18:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.843 18:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:26.843 [2024-12-06 18:05:14.600597] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.843 18:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:27.099 18:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:27.099 18:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:27.357 18:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:27.357 18:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:27.357 18:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:27.615 18:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=885bf787-75f8-4608-a039-e5fd1fe6aeb3 00:28:27.615 18:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 885bf787-75f8-4608-a039-e5fd1fe6aeb3 lvol 20 00:28:27.873 18:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=742db9e3-ebcd-46df-b91b-821b30a43e28 00:28:27.873 18:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:27.873 18:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 742db9e3-ebcd-46df-b91b-821b30a43e28 00:28:28.131 18:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:28.131 [2024-12-06 18:05:15.940529] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:28:28.391 18:05:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:28.391 18:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3252809 00:28:28.391 18:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:28.391 18:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:29.327 18:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 742db9e3-ebcd-46df-b91b-821b30a43e28 MY_SNAPSHOT 00:28:29.587 18:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c943da58-18e8-489d-8e0e-fdbae1243939 00:28:29.587 18:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 742db9e3-ebcd-46df-b91b-821b30a43e28 30 00:28:29.846 18:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c943da58-18e8-489d-8e0e-fdbae1243939 MY_CLONE 00:28:30.105 18:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e93ac0c6-d4ad-4e91-8ba1-921474c0f59f 00:28:30.105 18:05:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e93ac0c6-d4ad-4e91-8ba1-921474c0f59f 00:28:30.365 18:05:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3252809 00:28:40.349 Initializing NVMe Controllers 00:28:40.349 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:40.349 Controller IO queue size 128, less than required. 00:28:40.349 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:40.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:28:40.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:28:40.349 Initialization complete. Launching workers. 
00:28:40.349 ======================================================== 00:28:40.349 Latency(us) 00:28:40.349 Device Information : IOPS MiB/s Average min max 00:28:40.349 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16198.00 63.27 7903.04 1647.39 44825.08 00:28:40.349 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16519.60 64.53 7750.40 1230.50 44733.04 00:28:40.349 ======================================================== 00:28:40.349 Total : 32717.60 127.80 7825.97 1230.50 44825.08 00:28:40.349 00:28:40.349 18:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:40.349 18:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 742db9e3-ebcd-46df-b91b-821b30a43e28 00:28:40.349 18:05:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 885bf787-75f8-4608-a039-e5fd1fe6aeb3 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:40.349 rmmod nvme_tcp 00:28:40.349 rmmod nvme_fabrics 00:28:40.349 rmmod nvme_keyring 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3252256 ']' 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3252256 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3252256 ']' 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3252256 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3252256 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3252256' 00:28:40.349 killing process with pid 3252256 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3252256 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3252256 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.349 18:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:41.727 00:28:41.727 real 0m21.236s 00:28:41.727 user 0m54.730s 00:28:41.727 sys 0m8.851s 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:41.727 ************************************ 00:28:41.727 END TEST nvmf_lvol 00:28:41.727 ************************************ 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:41.727 ************************************ 00:28:41.727 START TEST nvmf_lvs_grow 00:28:41.727 
************************************ 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:41.727 * Looking for test storage... 00:28:41.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:41.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.727 --rc genhtml_branch_coverage=1 00:28:41.727 --rc genhtml_function_coverage=1 00:28:41.727 --rc genhtml_legend=1 00:28:41.727 --rc geninfo_all_blocks=1 00:28:41.727 --rc geninfo_unexecuted_blocks=1 00:28:41.727 00:28:41.727 ' 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:41.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.727 --rc genhtml_branch_coverage=1 00:28:41.727 --rc genhtml_function_coverage=1 00:28:41.727 --rc genhtml_legend=1 00:28:41.727 --rc geninfo_all_blocks=1 00:28:41.727 --rc geninfo_unexecuted_blocks=1 00:28:41.727 00:28:41.727 ' 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:41.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.727 --rc genhtml_branch_coverage=1 00:28:41.727 --rc genhtml_function_coverage=1 00:28:41.727 --rc genhtml_legend=1 00:28:41.727 --rc geninfo_all_blocks=1 00:28:41.727 --rc geninfo_unexecuted_blocks=1 00:28:41.727 00:28:41.727 ' 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:41.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.727 --rc genhtml_branch_coverage=1 00:28:41.727 --rc genhtml_function_coverage=1 00:28:41.727 --rc genhtml_legend=1 00:28:41.727 --rc geninfo_all_blocks=1 00:28:41.727 --rc geninfo_unexecuted_blocks=1 00:28:41.727 00:28:41.727 ' 00:28:41.727 18:05:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:41.727 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
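The lt/cmp_versions trace above (scripts/common.sh@333-368) is the gate that decides whether this lcov is new enough to skip the extra branch/function coverage flags: both version strings are split into fields and compared field by field, with missing fields treated as zero. A small self-contained sketch of the same idea, assuming plain numeric dot-separated versions (the real helper also splits on '-' and ':'):

    # Returns 0 (true) when version $1 sorts strictly below $2.
    version_lt() {
        local IFS=.          # split fields on dots, like read -ra ver1/ver2 above
        local -a a b
        a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1             # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "old lcov: add the --rc lcov_*_coverage=1 flags"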
00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:28:41.728 18:05:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:47.002 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:47.002 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:28:47.002 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:47.002 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:47.002 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:47.002 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:47.003 18:05:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:47.003 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:47.003 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:47.003 Found net devices under 0000:31:00.0: cvl_0_0 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:47.003 Found net devices under 0000:31:00.1: cvl_0_1 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:47.003 18:05:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:47.003 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:47.263 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:47.263 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:47.263 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:47.263 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:47.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:47.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:28:47.263 00:28:47.263 --- 10.0.0.2 ping statistics --- 00:28:47.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.263 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:28:47.263 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:47.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:47.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:28:47.263 00:28:47.263 --- 10.0.0.1 ping statistics --- 00:28:47.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.263 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:28:47.263 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:47.263 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:28:47.263 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:47.264 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:47.264 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:47.264 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:47.264 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:47.264 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:47.264 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:47.264 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:28:47.264 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:47.264 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:47.264 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:47.264 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3259617 00:28:47.264 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3259617 00:28:47.264 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3259617 ']' 00:28:47.264 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.264 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:47.264 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.264 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:47.264 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:47.264 18:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:47.264 [2024-12-06 18:05:34.935441] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
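Network plumbing for this test repeats the two-PF pattern from the lvol run above: one port of the E810 pair (cvl_0_0) is moved into a fresh namespace as the target side at 10.0.0.2/24, the other (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1/24, and the NVMe/TCP listen port is opened with a comment-tagged iptables rule so teardown can strip exactly that rule. Condensed from the commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # Tag the ACCEPT rule; cleanup later runs
    # iptables-save | grep -v SPDK_NVMF | iptables-restore to remove it.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'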
00:28:47.264 [2024-12-06 18:05:34.936427] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:28:47.264 [2024-12-06 18:05:34.936465] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:47.264 [2024-12-06 18:05:35.006740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.264 [2024-12-06 18:05:35.035573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:47.264 [2024-12-06 18:05:35.035599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:47.264 [2024-12-06 18:05:35.035608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:47.264 [2024-12-06 18:05:35.035613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:47.264 [2024-12-06 18:05:35.035617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:47.264 [2024-12-06 18:05:35.036081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.264 [2024-12-06 18:05:35.087242] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:47.264 [2024-12-06 18:05:35.087421] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:47.523 [2024-12-06 18:05:35.272772] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:47.523 ************************************ 00:28:47.523 START TEST lvs_grow_clean 00:28:47.523 ************************************ 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:47.523 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:47.802 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:47.802 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:48.060 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2a785f0b-2e08-4077-b1a6-729fbe46d422 00:28:48.060 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a785f0b-2e08-4077-b1a6-729fbe46d422 00:28:48.060 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:48.060 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:48.060 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:48.060 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2a785f0b-2e08-4077-b1a6-729fbe46d422 lvol 150 00:28:48.318 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f2b9bfe9-1b22-430a-a836-e2b0785bfe87 00:28:48.318 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:48.318 18:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:48.318 [2024-12-06 18:05:36.076445] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:48.318 [2024-12-06 18:05:36.076580] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:48.318 true 00:28:48.318 18:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:48.318 18:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a785f0b-2e08-4077-b1a6-729fbe46d422 00:28:48.575 18:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:48.575 18:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:48.575 18:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f2b9bfe9-1b22-430a-a836-e2b0785bfe87 00:28:48.833 18:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:49.091 [2024-12-06 18:05:36.697038] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.091 18:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:49.091 18:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3259996 00:28:49.091 18:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:49.092 18:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3259996 /var/tmp/bdevperf.sock 00:28:49.092 18:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:49.092 18:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3259996 ']' 00:28:49.092 18:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:49.092 18:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:49.092 18:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:49.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:49.092 18:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:49.092 18:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:49.092 [2024-12-06 18:05:36.897843] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:28:49.092 [2024-12-06 18:05:36.897901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259996 ] 00:28:49.349 [2024-12-06 18:05:36.977455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.349 [2024-12-06 18:05:37.013823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.917 18:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:49.917 18:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:28:49.917 18:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:50.176 Nvme0n1 00:28:50.176 18:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:50.436 [ 00:28:50.436 { 00:28:50.436 "name": "Nvme0n1", 00:28:50.436 "aliases": [ 00:28:50.436 "f2b9bfe9-1b22-430a-a836-e2b0785bfe87" 00:28:50.436 ], 00:28:50.436 "product_name": "NVMe disk", 00:28:50.436 "block_size": 4096, 00:28:50.436 "num_blocks": 38912, 00:28:50.436 "uuid": "f2b9bfe9-1b22-430a-a836-e2b0785bfe87", 00:28:50.436 "numa_id": 0, 00:28:50.436 "assigned_rate_limits": { 00:28:50.436 "rw_ios_per_sec": 0, 00:28:50.436 "rw_mbytes_per_sec": 0, 00:28:50.436 "r_mbytes_per_sec": 0, 00:28:50.436 "w_mbytes_per_sec": 0 00:28:50.436 }, 00:28:50.436 "claimed": false, 00:28:50.436 "zoned": false, 00:28:50.436 "supported_io_types": { 00:28:50.436 "read": true, 00:28:50.436 "write": true, 00:28:50.436 "unmap": true, 00:28:50.436 "flush": true, 00:28:50.436 "reset": true, 00:28:50.436 "nvme_admin": true, 00:28:50.436 "nvme_io": true, 00:28:50.436 "nvme_io_md": false, 00:28:50.436 "write_zeroes": true, 00:28:50.436 "zcopy": false, 00:28:50.436 "get_zone_info": false, 00:28:50.436 "zone_management": false, 00:28:50.436 "zone_append": false, 00:28:50.436 "compare": true, 00:28:50.436 "compare_and_write": true, 00:28:50.436 "abort": true, 00:28:50.436 "seek_hole": false, 00:28:50.436 "seek_data": false, 00:28:50.436 "copy": true, 
00:28:50.436 "nvme_iov_md": false 00:28:50.436 }, 00:28:50.436 "memory_domains": [ 00:28:50.436 { 00:28:50.436 "dma_device_id": "system", 00:28:50.436 "dma_device_type": 1 00:28:50.436 } 00:28:50.436 ], 00:28:50.436 "driver_specific": { 00:28:50.436 "nvme": [ 00:28:50.436 { 00:28:50.436 "trid": { 00:28:50.436 "trtype": "TCP", 00:28:50.436 "adrfam": "IPv4", 00:28:50.436 "traddr": "10.0.0.2", 00:28:50.436 "trsvcid": "4420", 00:28:50.436 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:50.436 }, 00:28:50.436 "ctrlr_data": { 00:28:50.436 "cntlid": 1, 00:28:50.436 "vendor_id": "0x8086", 00:28:50.436 "model_number": "SPDK bdev Controller", 00:28:50.436 "serial_number": "SPDK0", 00:28:50.436 "firmware_revision": "25.01", 00:28:50.436 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:50.436 "oacs": { 00:28:50.436 "security": 0, 00:28:50.436 "format": 0, 00:28:50.436 "firmware": 0, 00:28:50.436 "ns_manage": 0 00:28:50.436 }, 00:28:50.436 "multi_ctrlr": true, 00:28:50.436 "ana_reporting": false 00:28:50.436 }, 00:28:50.436 "vs": { 00:28:50.436 "nvme_version": "1.3" 00:28:50.436 }, 00:28:50.436 "ns_data": { 00:28:50.436 "id": 1, 00:28:50.436 "can_share": true 00:28:50.436 } 00:28:50.436 } 00:28:50.436 ], 00:28:50.436 "mp_policy": "active_passive" 00:28:50.436 } 00:28:50.436 } 00:28:50.436 ] 00:28:50.436 18:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:50.436 18:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3260331 00:28:50.436 18:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:50.436 Running I/O for 10 seconds... 
00:28:51.816 Latency(us) 00:28:51.816 [2024-12-06T17:05:39.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:51.816 Nvme0n1 : 1.00 17907.00 69.95 0.00 0.00 0.00 0.00 0.00 00:28:51.816 [2024-12-06T17:05:39.643Z] =================================================================================================================== 00:28:51.816 [2024-12-06T17:05:39.643Z] Total : 17907.00 69.95 0.00 0.00 0.00 0.00 0.00 00:28:51.816 00:28:52.384 18:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2a785f0b-2e08-4077-b1a6-729fbe46d422 00:28:52.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:52.646 Nvme0n1 : 2.00 18034.00 70.45 0.00 0.00 0.00 0.00 0.00 00:28:52.646 [2024-12-06T17:05:40.473Z] =================================================================================================================== 00:28:52.646 [2024-12-06T17:05:40.473Z] Total : 18034.00 70.45 0.00 0.00 0.00 0.00 0.00 00:28:52.646 00:28:52.646 true 00:28:52.646 18:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a785f0b-2e08-4077-b1a6-729fbe46d422 00:28:52.646 18:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:52.907 18:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:52.907 18:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:52.907 18:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3260331 00:28:53.476 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:53.476 Nvme0n1 : 3.00 18076.33 70.61 0.00 0.00 0.00 0.00 0.00 00:28:53.476 [2024-12-06T17:05:41.303Z] =================================================================================================================== 00:28:53.476 [2024-12-06T17:05:41.303Z] Total : 18076.33 70.61 0.00 0.00 0.00 0.00 0.00 00:28:53.476 00:28:54.415 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:54.416 Nvme0n1 : 4.00 19050.00 74.41 0.00 0.00 0.00 0.00 0.00 00:28:54.416 [2024-12-06T17:05:42.243Z] =================================================================================================================== 00:28:54.416 [2024-12-06T17:05:42.243Z] Total : 19050.00 74.41 0.00 0.00 0.00 0.00 0.00 00:28:54.416 00:28:55.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:55.797 Nvme0n1 : 5.00 20370.80 79.57 0.00 0.00 0.00 0.00 0.00 00:28:55.797 [2024-12-06T17:05:43.624Z] =================================================================================================================== 00:28:55.797 [2024-12-06T17:05:43.624Z] Total : 20370.80 79.57 0.00 0.00 0.00 0.00 0.00 00:28:55.797 00:28:56.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:56.738 Nvme0n1 : 6.00 21251.33 83.01 0.00 0.00 0.00 0.00 0.00 00:28:56.738 [2024-12-06T17:05:44.565Z] 
=================================================================================================================== 00:28:56.738 [2024-12-06T17:05:44.565Z] Total : 21251.33 83.01 0.00 0.00 0.00 0.00 0.00 00:28:56.738 00:28:57.674 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:57.674 Nvme0n1 : 7.00 21885.14 85.49 0.00 0.00 0.00 0.00 0.00 00:28:57.674 [2024-12-06T17:05:45.501Z] =================================================================================================================== 00:28:57.674 [2024-12-06T17:05:45.502Z] Total : 21885.14 85.49 0.00 0.00 0.00 0.00 0.00 00:28:57.675 00:28:58.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:58.614 Nvme0n1 : 8.00 22364.38 87.36 0.00 0.00 0.00 0.00 0.00 00:28:58.614 [2024-12-06T17:05:46.441Z] =================================================================================================================== 00:28:58.614 [2024-12-06T17:05:46.441Z] Total : 22364.38 87.36 0.00 0.00 0.00 0.00 0.00 00:28:58.614 00:28:59.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:59.584 Nvme0n1 : 9.00 22729.89 88.79 0.00 0.00 0.00 0.00 0.00 00:28:59.584 [2024-12-06T17:05:47.411Z] =================================================================================================================== 00:28:59.584 [2024-12-06T17:05:47.411Z] Total : 22729.89 88.79 0.00 0.00 0.00 0.00 0.00 00:28:59.584 00:29:00.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:00.564 Nvme0n1 : 10.00 23028.70 89.96 0.00 0.00 0.00 0.00 0.00 00:29:00.564 [2024-12-06T17:05:48.391Z] =================================================================================================================== 00:29:00.564 [2024-12-06T17:05:48.391Z] Total : 23028.70 89.96 0.00 0.00 0.00 0.00 0.00 00:29:00.564 00:29:00.564 00:29:00.564 Latency(us) 00:29:00.564 [2024-12-06T17:05:48.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:00.564 Nvme0n1 : 10.01 23034.63 89.98 0.00 0.00 5553.69 2744.32 15291.73 00:29:00.564 [2024-12-06T17:05:48.391Z] =================================================================================================================== 00:29:00.564 [2024-12-06T17:05:48.391Z] Total : 23034.63 89.98 0.00 0.00 5553.69 2744.32 15291.73 00:29:00.564 { 00:29:00.564 "results": [ 00:29:00.564 { 00:29:00.564 "job": "Nvme0n1", 00:29:00.564 "core_mask": "0x2", 00:29:00.564 "workload": "randwrite", 00:29:00.564 "status": "finished", 00:29:00.564 "queue_depth": 128, 00:29:00.564 "io_size": 4096, 00:29:00.564 "runtime": 10.005719, 00:29:00.564 "iops": 23034.626497106306, 00:29:00.564 "mibps": 89.97900975432151, 00:29:00.564 "io_failed": 0, 00:29:00.564 "io_timeout": 0, 00:29:00.564 "avg_latency_us": 5553.685505312148, 00:29:00.564 "min_latency_us": 2744.32, 00:29:00.564 "max_latency_us": 15291.733333333334 00:29:00.564 } 00:29:00.564 ], 00:29:00.564 "core_count": 1 00:29:00.564 } 00:29:00.564 18:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3259996 00:29:00.564 18:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3259996 ']' 00:29:00.564 18:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3259996 00:29:00.564 
18:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:00.564 18:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:00.564 18:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3259996 00:29:00.564 18:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:00.564 18:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:00.564 18:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3259996' 00:29:00.564 killing process with pid 3259996 00:29:00.564 18:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3259996 00:29:00.564 Received shutdown signal, test time was about 10.000000 seconds 00:29:00.564 00:29:00.564 Latency(us) 00:29:00.564 [2024-12-06T17:05:48.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.564 [2024-12-06T17:05:48.391Z] =================================================================================================================== 00:29:00.564 [2024-12-06T17:05:48.391Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:00.564 18:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3259996 00:29:00.824 18:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:00.824 18:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:01.085 18:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:01.085 18:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a785f0b-2e08-4077-b1a6-729fbe46d422 00:29:01.085 18:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:01.085 18:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:01.085 18:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:01.345 [2024-12-06 18:05:49.044571] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:01.345 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a785f0b-2e08-4077-b1a6-729fbe46d422 00:29:01.345 
18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:01.345 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a785f0b-2e08-4077-b1a6-729fbe46d422 00:29:01.345 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:01.345 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:01.345 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:01.345 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:01.346 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:01.346 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:01.346 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:01.346 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:01.346 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a785f0b-2e08-4077-b1a6-729fbe46d422 00:29:01.606 request: 00:29:01.606 { 00:29:01.606 "uuid": "2a785f0b-2e08-4077-b1a6-729fbe46d422", 00:29:01.606 "method": "bdev_lvol_get_lvstores", 00:29:01.606 "req_id": 1 00:29:01.606 } 00:29:01.606 Got JSON-RPC error response 00:29:01.606 response: 00:29:01.606 { 00:29:01.606 "code": -19, 00:29:01.606 "message": "No such device" 00:29:01.606 } 00:29:01.606 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:01.606 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:01.606 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:01.606 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:01.606 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:01.606 aio_bdev 00:29:01.606 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
f2b9bfe9-1b22-430a-a836-e2b0785bfe87 00:29:01.606 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=f2b9bfe9-1b22-430a-a836-e2b0785bfe87 00:29:01.606 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:01.606 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:01.606 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:01.606 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:01.606 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:01.865 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f2b9bfe9-1b22-430a-a836-e2b0785bfe87 -t 2000 00:29:01.865 [ 00:29:01.865 { 00:29:01.865 "name": "f2b9bfe9-1b22-430a-a836-e2b0785bfe87", 00:29:01.865 "aliases": [ 00:29:01.865 "lvs/lvol" 00:29:01.865 ], 00:29:01.865 "product_name": "Logical Volume", 00:29:01.865 "block_size": 4096, 00:29:01.865 "num_blocks": 38912, 00:29:01.865 "uuid": "f2b9bfe9-1b22-430a-a836-e2b0785bfe87", 00:29:01.865 "assigned_rate_limits": { 00:29:01.865 "rw_ios_per_sec": 0, 00:29:01.865 "rw_mbytes_per_sec": 0, 00:29:01.865 "r_mbytes_per_sec": 0, 00:29:01.865 "w_mbytes_per_sec": 0 00:29:01.865 }, 00:29:01.865 "claimed": false, 00:29:01.865 "zoned": false, 00:29:01.865 "supported_io_types": { 00:29:01.865 "read": true, 00:29:01.865 "write": true, 00:29:01.865 "unmap": true, 00:29:01.865 "flush": false, 00:29:01.865 "reset": true, 00:29:01.865 "nvme_admin": false, 00:29:01.865 "nvme_io": false, 00:29:01.865 "nvme_io_md": false, 00:29:01.865 "write_zeroes": true, 00:29:01.865 "zcopy": false, 00:29:01.865 "get_zone_info": false, 00:29:01.865 "zone_management": false, 00:29:01.865 "zone_append": false, 00:29:01.865 "compare": false, 00:29:01.865 "compare_and_write": false, 00:29:01.865 "abort": false, 00:29:01.865 "seek_hole": true, 00:29:01.865 "seek_data": true, 00:29:01.865 "copy": false, 00:29:01.865 "nvme_iov_md": false 00:29:01.865 }, 00:29:01.865 "driver_specific": { 00:29:01.865 "lvol": { 00:29:01.865 "lvol_store_uuid": "2a785f0b-2e08-4077-b1a6-729fbe46d422", 00:29:01.865 "base_bdev": "aio_bdev", 00:29:01.865 "thin_provision": false, 00:29:01.865 "num_allocated_clusters": 38, 00:29:01.865 "snapshot": false, 00:29:01.865 "clone": false, 00:29:01.865 "esnap_clone": false 00:29:01.865 } 00:29:01.865 } 00:29:01.865 } 00:29:01.865 ] 00:29:02.126 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:02.126 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a785f0b-2e08-4077-b1a6-729fbe46d422 00:29:02.126 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:02.126 18:05:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:02.126 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2a785f0b-2e08-4077-b1a6-729fbe46d422 00:29:02.126 18:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:02.386 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:02.386 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f2b9bfe9-1b22-430a-a836-e2b0785bfe87 00:29:02.386 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2a785f0b-2e08-4077-b1a6-729fbe46d422 00:29:02.647 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:02.908 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:02.908 00:29:02.909 real 0m15.209s 00:29:02.909 user 0m14.919s 00:29:02.909 sys 0m1.167s 00:29:02.909 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:02.909 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:02.909 ************************************ 00:29:02.909 END TEST lvs_grow_clean 00:29:02.909 ************************************ 00:29:02.909 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:02.909 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:02.909 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:02.909 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:02.909 ************************************ 00:29:02.909 START TEST lvs_grow_dirty 00:29:02.909 ************************************ 00:29:02.909 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:02.909 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:02.909 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:02.909 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:02.909 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:02.909 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:02.909 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:02.909 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:02.909 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:02.909 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:03.169 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:03.169 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:03.169 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a1ffb420-e24e-48de-887f-1283ba6c3cd8 00:29:03.169 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1ffb420-e24e-48de-887f-1283ba6c3cd8 00:29:03.169 18:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:03.428 18:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:03.428 18:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:03.428 18:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a1ffb420-e24e-48de-887f-1283ba6c3cd8 lvol 150 00:29:03.688 18:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e4e0591e-8797-4789-a8ad-3f9cb0eeb8e8 00:29:03.688 18:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:03.688 18:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:03.688 [2024-12-06 18:05:51.416498] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:03.688 [2024-12-06 18:05:51.416687] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:03.688 true 00:29:03.688 18:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1ffb420-e24e-48de-887f-1283ba6c3cd8 00:29:03.688 18:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:03.947 18:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:03.947 18:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:03.947 18:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e4e0591e-8797-4789-a8ad-3f9cb0eeb8e8 00:29:04.206 18:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:04.464 [2024-12-06 18:05:52.060995] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.464 18:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:04.464 18:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3263390 00:29:04.464 18:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:04.464 18:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:04.464 18:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3263390 /var/tmp/bdevperf.sock 00:29:04.464 18:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3263390 ']' 00:29:04.464 18:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:04.464 18:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:04.464 18:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:04.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
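lvs_grow_dirty repeats the clean setup verbatim (a fresh lvstore a1ffb420-e24e-48de-887f-1283ba6c3cd8 holding lvol e4e0591e-8797-4789-a8ad-3f9cb0eeb8e8) and again grows the lvstore a couple of seconds into a 10-second randwrite run; the point of the variant comes at teardown, where the target is killed with SIGKILL while the lvstore metadata is dirty rather than being shut down cleanly. The bdevperf half of the harness, condensed from the commands traced in this run: the flags are verbatim, and reading -z as "idle until the perform_tests RPC arrives" and -S 1 as "print the per-second rows seen below" is a hedged interpretation, not taken from the log itself:

  # Sketch of the traced bdevperf harness (paths relative to an SPDK checkout).
  sock=/var/tmp/bdevperf.sock
  ./build/examples/bdevperf -r "$sock" -m 0x2 -o 4096 -q 128 \
      -w randwrite -t 10 -S 1 -z &
  sleep 2        # stand-in for the script's waitforlisten on $sock
  ./scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  ./examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
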
00:29:04.464 18:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:04.464 18:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:04.465 [2024-12-06 18:05:52.264020] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:29:04.465 [2024-12-06 18:05:52.264074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3263390 ] 00:29:04.724 [2024-12-06 18:05:52.329735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.724 [2024-12-06 18:05:52.359409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.724 18:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.724 18:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:04.724 18:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:04.982 Nvme0n1 00:29:04.982 18:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:04.982 [ 00:29:04.982 { 00:29:04.982 "name": "Nvme0n1", 00:29:04.982 "aliases": [ 00:29:04.982 "e4e0591e-8797-4789-a8ad-3f9cb0eeb8e8" 00:29:04.982 ], 00:29:04.982 "product_name": "NVMe disk", 00:29:04.982 "block_size": 4096, 00:29:04.982 "num_blocks": 38912, 00:29:04.982 "uuid": "e4e0591e-8797-4789-a8ad-3f9cb0eeb8e8", 00:29:04.982 "numa_id": 0, 00:29:04.982 "assigned_rate_limits": { 00:29:04.982 "rw_ios_per_sec": 0, 00:29:04.982 "rw_mbytes_per_sec": 0, 00:29:04.982 "r_mbytes_per_sec": 0, 00:29:04.982 "w_mbytes_per_sec": 0 00:29:04.982 }, 00:29:04.982 "claimed": false, 00:29:04.982 "zoned": false, 00:29:04.982 "supported_io_types": { 00:29:04.982 "read": true, 00:29:04.982 "write": true, 00:29:04.982 "unmap": true, 00:29:04.982 "flush": true, 00:29:04.982 "reset": true, 00:29:04.982 "nvme_admin": true, 00:29:04.982 "nvme_io": true, 00:29:04.982 "nvme_io_md": false, 00:29:04.982 "write_zeroes": true, 00:29:04.982 "zcopy": false, 00:29:04.982 "get_zone_info": false, 00:29:04.982 "zone_management": false, 00:29:04.982 "zone_append": false, 00:29:04.982 "compare": true, 00:29:04.982 "compare_and_write": true, 00:29:04.982 "abort": true, 00:29:04.982 "seek_hole": false, 00:29:04.982 "seek_data": false, 00:29:04.982 "copy": true, 00:29:04.982 "nvme_iov_md": false 00:29:04.982 }, 00:29:04.982 "memory_domains": [ 00:29:04.982 { 00:29:04.982 "dma_device_id": "system", 00:29:04.982 "dma_device_type": 1 00:29:04.982 } 00:29:04.982 ], 00:29:04.982 "driver_specific": { 00:29:04.982 "nvme": [ 00:29:04.982 { 00:29:04.982 "trid": { 00:29:04.982 "trtype": "TCP", 00:29:04.982 "adrfam": "IPv4", 00:29:04.982 "traddr": "10.0.0.2", 00:29:04.982 "trsvcid": "4420", 00:29:04.982 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:04.982 }, 00:29:04.982 "ctrlr_data": 
{ 00:29:04.982 "cntlid": 1, 00:29:04.982 "vendor_id": "0x8086", 00:29:04.982 "model_number": "SPDK bdev Controller", 00:29:04.982 "serial_number": "SPDK0", 00:29:04.982 "firmware_revision": "25.01", 00:29:04.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:04.982 "oacs": { 00:29:04.982 "security": 0, 00:29:04.982 "format": 0, 00:29:04.982 "firmware": 0, 00:29:04.982 "ns_manage": 0 00:29:04.982 }, 00:29:04.982 "multi_ctrlr": true, 00:29:04.982 "ana_reporting": false 00:29:04.982 }, 00:29:04.982 "vs": { 00:29:04.982 "nvme_version": "1.3" 00:29:04.982 }, 00:29:04.982 "ns_data": { 00:29:04.982 "id": 1, 00:29:04.982 "can_share": true 00:29:04.982 } 00:29:04.983 } 00:29:04.983 ], 00:29:04.983 "mp_policy": "active_passive" 00:29:04.983 } 00:29:04.983 } 00:29:04.983 ] 00:29:04.983 18:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3263398 00:29:04.983 18:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:04.983 18:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:05.241 Running I/O for 10 seconds... 00:29:06.178 Latency(us) 00:29:06.178 [2024-12-06T17:05:54.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:06.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:06.178 Nvme0n1 : 1.00 25221.00 98.52 0.00 0.00 0.00 0.00 0.00 00:29:06.178 [2024-12-06T17:05:54.005Z] =================================================================================================================== 00:29:06.178 [2024-12-06T17:05:54.005Z] Total : 25221.00 98.52 0.00 0.00 0.00 0.00 0.00 00:29:06.178 00:29:07.115 18:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a1ffb420-e24e-48de-887f-1283ba6c3cd8 00:29:07.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:07.115 Nvme0n1 : 2.00 25345.50 99.01 0.00 0.00 0.00 0.00 0.00 00:29:07.115 [2024-12-06T17:05:54.942Z] =================================================================================================================== 00:29:07.115 [2024-12-06T17:05:54.942Z] Total : 25345.50 99.01 0.00 0.00 0.00 0.00 0.00 00:29:07.115 00:29:07.377 true 00:29:07.377 18:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1ffb420-e24e-48de-887f-1283ba6c3cd8 00:29:07.377 18:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:07.377 18:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:07.377 18:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:07.377 18:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3263398 00:29:08.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:08.312 Nvme0n1 : 
3.00 25427.67 99.33 0.00 0.00 0.00 0.00 0.00 00:29:08.312 [2024-12-06T17:05:56.140Z] =================================================================================================================== 00:29:08.313 [2024-12-06T17:05:56.140Z] Total : 25427.67 99.33 0.00 0.00 0.00 0.00 0.00 00:29:08.313 00:29:09.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:09.247 Nvme0n1 : 4.00 25469.75 99.49 0.00 0.00 0.00 0.00 0.00 00:29:09.247 [2024-12-06T17:05:57.074Z] =================================================================================================================== 00:29:09.247 [2024-12-06T17:05:57.074Z] Total : 25469.75 99.49 0.00 0.00 0.00 0.00 0.00 00:29:09.247 00:29:10.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:10.182 Nvme0n1 : 5.00 25494.40 99.59 0.00 0.00 0.00 0.00 0.00 00:29:10.182 [2024-12-06T17:05:58.009Z] =================================================================================================================== 00:29:10.182 [2024-12-06T17:05:58.009Z] Total : 25494.40 99.59 0.00 0.00 0.00 0.00 0.00 00:29:10.182 00:29:11.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:11.131 Nvme0n1 : 6.00 25531.83 99.73 0.00 0.00 0.00 0.00 0.00 00:29:11.131 [2024-12-06T17:05:58.958Z] =================================================================================================================== 00:29:11.131 [2024-12-06T17:05:58.958Z] Total : 25531.83 99.73 0.00 0.00 0.00 0.00 0.00 00:29:11.131 00:29:12.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:12.066 Nvme0n1 : 7.00 25542.86 99.78 0.00 0.00 0.00 0.00 0.00 00:29:12.066 [2024-12-06T17:05:59.893Z] =================================================================================================================== 00:29:12.066 [2024-12-06T17:05:59.893Z] Total : 25542.86 99.78 0.00 0.00 0.00 0.00 0.00 00:29:12.066 00:29:13.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:13.444 Nvme0n1 : 8.00 25555.38 99.83 0.00 0.00 0.00 0.00 0.00 00:29:13.444 [2024-12-06T17:06:01.271Z] =================================================================================================================== 00:29:13.444 [2024-12-06T17:06:01.271Z] Total : 25555.38 99.83 0.00 0.00 0.00 0.00 0.00 00:29:13.444 00:29:14.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:14.380 Nvme0n1 : 9.00 25561.22 99.85 0.00 0.00 0.00 0.00 0.00 00:29:14.380 [2024-12-06T17:06:02.207Z] =================================================================================================================== 00:29:14.380 [2024-12-06T17:06:02.207Z] Total : 25561.22 99.85 0.00 0.00 0.00 0.00 0.00 00:29:14.380 00:29:15.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:15.317 Nvme0n1 : 10.00 25577.30 99.91 0.00 0.00 0.00 0.00 0.00 00:29:15.317 [2024-12-06T17:06:03.144Z] =================================================================================================================== 00:29:15.317 [2024-12-06T17:06:03.144Z] Total : 25577.30 99.91 0.00 0.00 0.00 0.00 0.00 00:29:15.317 00:29:15.317 00:29:15.317 Latency(us) 00:29:15.317 [2024-12-06T17:06:03.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:15.317 Nvme0n1 : 10.00 25578.40 99.92 0.00 0.00 5001.45 1590.61 10212.69 00:29:15.317 
[2024-12-06T17:06:03.144Z] =================================================================================================================== 00:29:15.317 [2024-12-06T17:06:03.144Z] Total : 25578.40 99.92 0.00 0.00 5001.45 1590.61 10212.69 00:29:15.317 { 00:29:15.317 "results": [ 00:29:15.317 { 00:29:15.317 "job": "Nvme0n1", 00:29:15.317 "core_mask": "0x2", 00:29:15.317 "workload": "randwrite", 00:29:15.317 "status": "finished", 00:29:15.317 "queue_depth": 128, 00:29:15.317 "io_size": 4096, 00:29:15.317 "runtime": 10.004574, 00:29:15.317 "iops": 25578.40043963891, 00:29:15.317 "mibps": 99.91562671733949, 00:29:15.317 "io_failed": 0, 00:29:15.317 "io_timeout": 0, 00:29:15.317 "avg_latency_us": 5001.452222018151, 00:29:15.317 "min_latency_us": 1590.6133333333332, 00:29:15.317 "max_latency_us": 10212.693333333333 00:29:15.317 } 00:29:15.317 ], 00:29:15.317 "core_count": 1 00:29:15.317 } 00:29:15.317 18:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3263390 00:29:15.317 18:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3263390 ']' 00:29:15.317 18:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3263390 00:29:15.317 18:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:29:15.317 18:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:15.317 18:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3263390 00:29:15.317 18:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:15.317 18:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:15.317 18:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3263390' 00:29:15.317 killing process with pid 3263390 00:29:15.317 18:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3263390 00:29:15.317 Received shutdown signal, test time was about 10.000000 seconds 00:29:15.317 00:29:15.317 Latency(us) 00:29:15.317 [2024-12-06T17:06:03.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.317 [2024-12-06T17:06:03.144Z] =================================================================================================================== 00:29:15.317 [2024-12-06T17:06:03.144Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:15.317 18:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3263390 00:29:15.317 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:15.576 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:29:15.576 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1ffb420-e24e-48de-887f-1283ba6c3cd8 00:29:15.576 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:15.835 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:15.835 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:15.835 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3259617 00:29:15.835 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3259617 00:29:15.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3259617 Killed "${NVMF_APP[@]}" "$@" 00:29:15.835 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:15.835 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:15.835 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:15.835 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:15.835 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:15.835 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3265840 00:29:15.835 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3265840 00:29:15.835 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3265840 ']' 00:29:15.835 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.835 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:15.835 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:15.835 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:15.835 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:15.835 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:15.835 [2024-12-06 18:06:03.605897] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:15.835 [2024-12-06 18:06:03.606885] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:29:15.835 [2024-12-06 18:06:03.606925] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.094 [2024-12-06 18:06:03.669436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.094 [2024-12-06 18:06:03.698229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:16.094 [2024-12-06 18:06:03.698256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:16.094 [2024-12-06 18:06:03.698262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:16.094 [2024-12-06 18:06:03.698266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:16.094 [2024-12-06 18:06:03.698271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:16.094 [2024-12-06 18:06:03.698748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.094 [2024-12-06 18:06:03.750177] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:16.094 [2024-12-06 18:06:03.750357] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
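By this point the dirty variant has delivered the kill -9 to the original target (pid 3259617) with the grown lvstore still open, and a fresh nvmf_tgt (pid 3265840) has come up with --interrupt-mode. Re-creating the AIO bdev in the lines that follow is what forces the blobstore to replay and recover its metadata (the "Performing recovery on blobstore" notices), after which the test verifies the growth survived the unclean shutdown: 99 total data clusters with 61 free, meaning the 38 clusters backing the 150 MiB lvol are still accounted for. A minimal sketch of that recovery check, with the lvstore UUID taken from this run and the backing-file path shortened to a placeholder:

  # Recovery check after the unclean shutdown; re-attaching the file-backed
  # bdev is what triggers blobstore recovery.
  ./scripts/rpc.py bdev_aio_create /path/to/aio_bdev aio_bdev 4096
  ./scripts/rpc.py bdev_lvol_get_lvstores -u a1ffb420-e24e-48de-887f-1283ba6c3cd8 \
      | jq -r '.[0].free_clusters, .[0].total_data_clusters'   # expect 61 and 99
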
00:29:16.094 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:16.094 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:16.094 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:16.094 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:16.094 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:16.094 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:16.094 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:16.353 [2024-12-06 18:06:03.941616] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:16.353 [2024-12-06 18:06:03.941712] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:16.353 [2024-12-06 18:06:03.941736] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:16.353 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:16.353 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e4e0591e-8797-4789-a8ad-3f9cb0eeb8e8 00:29:16.353 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e4e0591e-8797-4789-a8ad-3f9cb0eeb8e8 00:29:16.353 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:16.353 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:16.353 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:16.353 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:16.353 18:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:16.353 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e4e0591e-8797-4789-a8ad-3f9cb0eeb8e8 -t 2000 00:29:16.611 [ 00:29:16.611 { 00:29:16.611 "name": "e4e0591e-8797-4789-a8ad-3f9cb0eeb8e8", 00:29:16.611 "aliases": [ 00:29:16.611 "lvs/lvol" 00:29:16.611 ], 00:29:16.611 "product_name": "Logical Volume", 00:29:16.611 "block_size": 4096, 00:29:16.611 "num_blocks": 38912, 00:29:16.611 "uuid": "e4e0591e-8797-4789-a8ad-3f9cb0eeb8e8", 00:29:16.611 "assigned_rate_limits": { 00:29:16.611 "rw_ios_per_sec": 0, 00:29:16.611 "rw_mbytes_per_sec": 0, 00:29:16.611 
"r_mbytes_per_sec": 0, 00:29:16.611 "w_mbytes_per_sec": 0 00:29:16.611 }, 00:29:16.611 "claimed": false, 00:29:16.611 "zoned": false, 00:29:16.611 "supported_io_types": { 00:29:16.611 "read": true, 00:29:16.611 "write": true, 00:29:16.611 "unmap": true, 00:29:16.611 "flush": false, 00:29:16.611 "reset": true, 00:29:16.611 "nvme_admin": false, 00:29:16.611 "nvme_io": false, 00:29:16.611 "nvme_io_md": false, 00:29:16.612 "write_zeroes": true, 00:29:16.612 "zcopy": false, 00:29:16.612 "get_zone_info": false, 00:29:16.612 "zone_management": false, 00:29:16.612 "zone_append": false, 00:29:16.612 "compare": false, 00:29:16.612 "compare_and_write": false, 00:29:16.612 "abort": false, 00:29:16.612 "seek_hole": true, 00:29:16.612 "seek_data": true, 00:29:16.612 "copy": false, 00:29:16.612 "nvme_iov_md": false 00:29:16.612 }, 00:29:16.612 "driver_specific": { 00:29:16.612 "lvol": { 00:29:16.612 "lvol_store_uuid": "a1ffb420-e24e-48de-887f-1283ba6c3cd8", 00:29:16.612 "base_bdev": "aio_bdev", 00:29:16.612 "thin_provision": false, 00:29:16.612 "num_allocated_clusters": 38, 00:29:16.612 "snapshot": false, 00:29:16.612 "clone": false, 00:29:16.612 "esnap_clone": false 00:29:16.612 } 00:29:16.612 } 00:29:16.612 } 00:29:16.612 ] 00:29:16.612 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:16.612 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1ffb420-e24e-48de-887f-1283ba6c3cd8 00:29:16.612 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:16.870 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:16.870 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1ffb420-e24e-48de-887f-1283ba6c3cd8 00:29:16.870 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:16.870 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:16.870 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:17.129 [2024-12-06 18:06:04.771259] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:17.129 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1ffb420-e24e-48de-887f-1283ba6c3cd8 00:29:17.129 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:29:17.129 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1ffb420-e24e-48de-887f-1283ba6c3cd8 00:29:17.129 18:06:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:17.129 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:17.129 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:17.129 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:17.129 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:17.129 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:17.129 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:17.129 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:17.129 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1ffb420-e24e-48de-887f-1283ba6c3cd8 00:29:17.129 request: 00:29:17.129 { 00:29:17.129 "uuid": "a1ffb420-e24e-48de-887f-1283ba6c3cd8", 00:29:17.129 "method": "bdev_lvol_get_lvstores", 00:29:17.129 "req_id": 1 00:29:17.129 } 00:29:17.129 Got JSON-RPC error response 00:29:17.129 response: 00:29:17.129 { 00:29:17.129 "code": -19, 00:29:17.129 "message": "No such device" 00:29:17.129 } 00:29:17.389 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:29:17.389 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:17.389 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:17.389 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:17.389 18:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:17.389 aio_bdev 00:29:17.389 18:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e4e0591e-8797-4789-a8ad-3f9cb0eeb8e8 00:29:17.389 18:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e4e0591e-8797-4789-a8ad-3f9cb0eeb8e8 00:29:17.389 18:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:17.389 18:06:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:29:17.389 18:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:17.389 18:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:17.389 18:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:17.647 18:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e4e0591e-8797-4789-a8ad-3f9cb0eeb8e8 -t 2000 00:29:17.647 [ 00:29:17.647 { 00:29:17.647 "name": "e4e0591e-8797-4789-a8ad-3f9cb0eeb8e8", 00:29:17.647 "aliases": [ 00:29:17.647 "lvs/lvol" 00:29:17.647 ], 00:29:17.647 "product_name": "Logical Volume", 00:29:17.647 "block_size": 4096, 00:29:17.647 "num_blocks": 38912, 00:29:17.647 "uuid": "e4e0591e-8797-4789-a8ad-3f9cb0eeb8e8", 00:29:17.647 "assigned_rate_limits": { 00:29:17.647 "rw_ios_per_sec": 0, 00:29:17.647 "rw_mbytes_per_sec": 0, 00:29:17.647 "r_mbytes_per_sec": 0, 00:29:17.647 "w_mbytes_per_sec": 0 00:29:17.647 }, 00:29:17.647 "claimed": false, 00:29:17.647 "zoned": false, 00:29:17.647 "supported_io_types": { 00:29:17.647 "read": true, 00:29:17.647 "write": true, 00:29:17.647 "unmap": true, 00:29:17.647 "flush": false, 00:29:17.647 "reset": true, 00:29:17.647 "nvme_admin": false, 00:29:17.647 "nvme_io": false, 00:29:17.647 "nvme_io_md": false, 00:29:17.647 "write_zeroes": true, 00:29:17.647 "zcopy": false, 00:29:17.647 "get_zone_info": false, 00:29:17.647 "zone_management": false, 00:29:17.647 "zone_append": false, 00:29:17.647 "compare": false, 00:29:17.647 "compare_and_write": false, 00:29:17.647 "abort": false, 00:29:17.647 "seek_hole": true, 00:29:17.647 "seek_data": true, 00:29:17.647 "copy": false, 00:29:17.647 "nvme_iov_md": false 00:29:17.647 }, 00:29:17.647 "driver_specific": { 00:29:17.647 "lvol": { 00:29:17.647 "lvol_store_uuid": "a1ffb420-e24e-48de-887f-1283ba6c3cd8", 00:29:17.647 "base_bdev": "aio_bdev", 00:29:17.647 "thin_provision": false, 00:29:17.647 "num_allocated_clusters": 38, 00:29:17.647 "snapshot": false, 00:29:17.647 "clone": false, 00:29:17.647 "esnap_clone": false 00:29:17.647 } 00:29:17.647 } 00:29:17.647 } 00:29:17.647 ] 00:29:17.647 18:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:29:17.647 18:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1ffb420-e24e-48de-887f-1283ba6c3cd8 00:29:17.647 18:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:17.906 18:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:17.906 18:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1ffb420-e24e-48de-887f-1283ba6c3cd8 00:29:17.906 18:06:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:17.906 18:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:17.906 18:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e4e0591e-8797-4789-a8ad-3f9cb0eeb8e8 00:29:18.165 18:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a1ffb420-e24e-48de-887f-1283ba6c3cd8 00:29:18.424 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:18.424 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:18.424 00:29:18.424 real 0m15.661s 00:29:18.424 user 0m33.998s 00:29:18.424 sys 0m2.611s 00:29:18.424 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:18.424 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:18.424 ************************************ 00:29:18.424 END TEST lvs_grow_dirty 00:29:18.424 ************************************ 00:29:18.683 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:18.683 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:29:18.683 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:29:18.683 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:29:18.683 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:18.683 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:29:18.683 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:29:18.683 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:29:18.683 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:18.683 nvmf_trace.0 00:29:18.683 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:29:18.683 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:18.683 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:18.683 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:29:18.683 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:18.683 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:18.684 rmmod nvme_tcp 00:29:18.684 rmmod nvme_fabrics 00:29:18.684 rmmod nvme_keyring 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3265840 ']' 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3265840 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3265840 ']' 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3265840 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3265840 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3265840' 00:29:18.684 killing process with pid 3265840 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3265840 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3265840 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.684 18:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:21.220 00:29:21.220 real 0m39.222s 00:29:21.220 user 0m50.943s 00:29:21.220 sys 0m8.152s 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:21.220 ************************************ 00:29:21.220 END TEST nvmf_lvs_grow 00:29:21.220 ************************************ 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:21.220 ************************************ 00:29:21.220 START TEST nvmf_bdev_io_wait 00:29:21.220 ************************************ 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:21.220 * Looking for test storage... 
00:29:21.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:21.220 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:21.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.221 --rc genhtml_branch_coverage=1 00:29:21.221 --rc genhtml_function_coverage=1 00:29:21.221 --rc genhtml_legend=1 00:29:21.221 --rc geninfo_all_blocks=1 00:29:21.221 --rc geninfo_unexecuted_blocks=1 00:29:21.221 00:29:21.221 ' 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:21.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.221 --rc genhtml_branch_coverage=1 00:29:21.221 --rc genhtml_function_coverage=1 00:29:21.221 --rc genhtml_legend=1 00:29:21.221 --rc geninfo_all_blocks=1 00:29:21.221 --rc geninfo_unexecuted_blocks=1 00:29:21.221 00:29:21.221 ' 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:21.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.221 --rc genhtml_branch_coverage=1 00:29:21.221 --rc genhtml_function_coverage=1 00:29:21.221 --rc genhtml_legend=1 00:29:21.221 --rc geninfo_all_blocks=1 00:29:21.221 --rc geninfo_unexecuted_blocks=1 00:29:21.221 00:29:21.221 ' 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:21.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.221 --rc genhtml_branch_coverage=1 00:29:21.221 --rc genhtml_function_coverage=1 00:29:21.221 --rc genhtml_legend=1 00:29:21.221 --rc geninfo_all_blocks=1 00:29:21.221 --rc 
geninfo_unexecuted_blocks=1 00:29:21.221 00:29:21.221 ' 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:21.221 18:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:26.500 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:26.500 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.500 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:26.501 Found net devices under 0000:31:00.0: cvl_0_0 00:29:26.501 
18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:26.501 Found net devices under 0000:31:00.1: cvl_0_1 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:26.501 18:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:26.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:26.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:29:26.501 00:29:26.501 --- 10.0.0.2 ping statistics --- 00:29:26.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.501 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:26.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:26.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:29:26.501 00:29:26.501 --- 10.0.0.1 ping statistics --- 00:29:26.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.501 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3271350 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3271350 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3271350 ']' 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:26.501 18:06:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:29:26.501 [2024-12-06 18:06:14.293133] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:26.501 [2024-12-06 18:06:14.294107] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:29:26.501 [2024-12-06 18:06:14.294144] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.761 [2024-12-06 18:06:14.380328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:26.761 [2024-12-06 18:06:14.417877] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:26.761 [2024-12-06 18:06:14.417911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:26.761 [2024-12-06 18:06:14.417920] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:26.761 [2024-12-06 18:06:14.417926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:26.761 [2024-12-06 18:06:14.417932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:26.761 [2024-12-06 18:06:14.419756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.761 [2024-12-06 18:06:14.419871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:26.761 [2024-12-06 18:06:14.420021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.761 [2024-12-06 18:06:14.420022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:26.761 [2024-12-06 18:06:14.420280] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:27.330 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:27.330 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:29:27.330 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:27.330 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:27.330 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:27.330 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:27.330 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:27.330 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.330 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:27.330 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.330 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:27.330 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.330 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:27.330 [2024-12-06 18:06:15.139485] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:27.330 [2024-12-06 18:06:15.139487] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:27.331 [2024-12-06 18:06:15.139573] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:27.331 [2024-12-06 18:06:15.139586] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:29:27.331 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.331 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:27.331 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.331 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:27.331 [2024-12-06 18:06:15.144778] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:27.590 Malloc0 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:27.590 [2024-12-06 18:06:15.192609] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3271696 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3271697 00:29:27.590 18:06:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3271699 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3271701 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.590 { 00:29:27.590 "params": { 00:29:27.590 "name": "Nvme$subsystem", 00:29:27.590 "trtype": "$TEST_TRANSPORT", 00:29:27.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.590 "adrfam": "ipv4", 00:29:27.590 "trsvcid": "$NVMF_PORT", 00:29:27.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.590 "hdgst": ${hdgst:-false}, 00:29:27.590 "ddgst": ${ddgst:-false} 00:29:27.590 }, 00:29:27.590 "method": "bdev_nvme_attach_controller" 00:29:27.590 } 00:29:27.590 EOF 00:29:27.590 )") 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:27.590 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.591 { 00:29:27.591 "params": { 00:29:27.591 "name": "Nvme$subsystem", 00:29:27.591 "trtype": "$TEST_TRANSPORT", 00:29:27.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.591 "adrfam": "ipv4", 00:29:27.591 "trsvcid": "$NVMF_PORT", 00:29:27.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.591 "hdgst": ${hdgst:-false}, 00:29:27.591 "ddgst": ${ddgst:-false} 00:29:27.591 }, 00:29:27.591 "method": "bdev_nvme_attach_controller" 00:29:27.591 } 00:29:27.591 EOF 00:29:27.591 )") 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 
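[annotator's note] The heredoc fragments above rely on bash default expansion: ${hdgst:-false} resolves to the caller's value when hdgst is set and to the literal false otherwise, which is how the generated JSON stays valid even with no digest variables exported. A two-line illustration (hypothetical variable values, same expansion rule):

  unset hdgst; echo "\"hdgst\": ${hdgst:-false}"   # -> "hdgst": false
  hdgst=true;  echo "\"hdgst\": ${hdgst:-false}"   # -> "hdgst": true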
00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3271696 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.591 { 00:29:27.591 "params": { 00:29:27.591 "name": "Nvme$subsystem", 00:29:27.591 "trtype": "$TEST_TRANSPORT", 00:29:27.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.591 "adrfam": "ipv4", 00:29:27.591 "trsvcid": "$NVMF_PORT", 00:29:27.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.591 "hdgst": ${hdgst:-false}, 00:29:27.591 "ddgst": ${ddgst:-false} 00:29:27.591 }, 00:29:27.591 "method": "bdev_nvme_attach_controller" 00:29:27.591 } 00:29:27.591 EOF 00:29:27.591 )") 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.591 { 00:29:27.591 "params": { 00:29:27.591 "name": "Nvme$subsystem", 00:29:27.591 "trtype": "$TEST_TRANSPORT", 00:29:27.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.591 "adrfam": "ipv4", 00:29:27.591 "trsvcid": "$NVMF_PORT", 00:29:27.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.591 "hdgst": ${hdgst:-false}, 00:29:27.591 "ddgst": ${ddgst:-false} 00:29:27.591 }, 00:29:27.591 "method": "bdev_nvme_attach_controller" 00:29:27.591 } 00:29:27.591 EOF 00:29:27.591 )") 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
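[annotator's note] Each of the four bdevperf processes gets its own copy of that template via gen_nvmf_target_json; judging from the config+=/cat records and the jq/IFS/printf calls in the trace, the helper collects one attach-controller fragment per subsystem, joins them with commas, and pretty-prints the result. A rough sketch of that pattern (the surrounding JSON skeleton here is an assumption, not the helper's exact output):

  # Sketch only: collect JSON fragments, join with IFS="," and validate via jq.
  config=()
  config+=('{ "method": "bdev_nvme_attach_controller", "params": { "name": "Nvme1" } }')
  config+=('{ "method": "bdev_nvme_attach_controller", "params": { "name": "Nvme2" } }')
  IFS=,
  jq . <<JSON
  { "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
  JSON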
00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:27.591 "params": { 00:29:27.591 "name": "Nvme1", 00:29:27.591 "trtype": "tcp", 00:29:27.591 "traddr": "10.0.0.2", 00:29:27.591 "adrfam": "ipv4", 00:29:27.591 "trsvcid": "4420", 00:29:27.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:27.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:27.591 "hdgst": false, 00:29:27.591 "ddgst": false 00:29:27.591 }, 00:29:27.591 "method": "bdev_nvme_attach_controller" 00:29:27.591 }' 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:27.591 "params": { 00:29:27.591 "name": "Nvme1", 00:29:27.591 "trtype": "tcp", 00:29:27.591 "traddr": "10.0.0.2", 00:29:27.591 "adrfam": "ipv4", 00:29:27.591 "trsvcid": "4420", 00:29:27.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:27.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:27.591 "hdgst": false, 00:29:27.591 "ddgst": false 00:29:27.591 }, 00:29:27.591 "method": "bdev_nvme_attach_controller" 00:29:27.591 }' 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:27.591 "params": { 00:29:27.591 "name": "Nvme1", 00:29:27.591 "trtype": "tcp", 00:29:27.591 "traddr": "10.0.0.2", 00:29:27.591 "adrfam": "ipv4", 00:29:27.591 "trsvcid": "4420", 00:29:27.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:27.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:27.591 "hdgst": false, 00:29:27.591 "ddgst": false 00:29:27.591 }, 00:29:27.591 "method": "bdev_nvme_attach_controller" 00:29:27.591 }' 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:27.591 18:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:27.591 "params": { 00:29:27.591 "name": "Nvme1", 00:29:27.591 "trtype": "tcp", 00:29:27.591 "traddr": "10.0.0.2", 00:29:27.591 "adrfam": "ipv4", 00:29:27.591 "trsvcid": "4420", 00:29:27.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:27.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:27.591 "hdgst": false, 00:29:27.591 "ddgst": false 00:29:27.591 }, 00:29:27.591 "method": "bdev_nvme_attach_controller" 00:29:27.591 }' 00:29:27.591 [2024-12-06 18:06:15.227372] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:29:27.591 [2024-12-06 18:06:15.227416] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:29:27.591 [2024-12-06 18:06:15.229997] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:29:27.591 [2024-12-06 18:06:15.230043] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:27.591 [2024-12-06 18:06:15.230279] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:29:27.591 [2024-12-06 18:06:15.230325] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:29:27.591 [2024-12-06 18:06:15.231878] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:29:27.591 [2024-12-06 18:06:15.231923] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:27.591 [2024-12-06 18:06:15.351784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.591 [2024-12-06 18:06:15.380242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:27.591 [2024-12-06 18:06:15.400807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.850 [2024-12-06 18:06:15.429218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:27.850 [2024-12-06 18:06:15.451087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.850 [2024-12-06 18:06:15.480619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:27.850 [2024-12-06 18:06:15.503401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.850 [2024-12-06 18:06:15.532585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:27.850 Running I/O for 1 seconds... 00:29:28.109 Running I/O for 1 seconds... 00:29:28.109 Running I/O for 1 seconds... 00:29:28.109 Running I/O for 1 seconds... 
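[annotator's note] Each "Running I/O" line above belongs to one of four concurrent bdevperf instances, one per workload, each fed its config through process substitution (the /dev/fd/63 seen in the command lines). The write instance, reconstructed from the trace with gen_nvmf_target_json as the suite helper shown above and paths relative to the repo root:

  # One of the four concurrent perf jobs; flags copied from the trace.
  ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w write -t 1 -s 256 &
  WRITE_PID=$!   # the script later waits on this pid (wait 3271696)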
00:29:29.046 14644.00 IOPS, 57.20 MiB/s
00:29:29.046 Latency(us)
00:29:29.046 [2024-12-06T17:06:16.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:29.046 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:29:29.046 Nvme1n1 : 1.01 14703.30 57.43 0.00 0.00 8680.21 4205.23 10813.44
00:29:29.046 [2024-12-06T17:06:16.873Z] ===================================================================================================================
00:29:29.046 [2024-12-06T17:06:16.873Z] Total : 14703.30 57.43 0.00 0.00 8680.21 4205.23 10813.44
00:29:29.046 181008.00 IOPS, 707.06 MiB/s
00:29:29.046 Latency(us)
00:29:29.046 [2024-12-06T17:06:16.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:29.046 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:29:29.046 Nvme1n1 : 1.00 180650.90 705.67 0.00 0.00 704.41 296.96 1966.08
00:29:29.046 [2024-12-06T17:06:16.873Z] ===================================================================================================================
00:29:29.046 [2024-12-06T17:06:16.873Z] Total : 180650.90 705.67 0.00 0.00 704.41 296.96 1966.08
00:29:29.046 11842.00 IOPS, 46.26 MiB/s
00:29:29.046 Latency(us)
00:29:29.046 [2024-12-06T17:06:16.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:29.046 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:29:29.046 Nvme1n1 : 1.01 11913.68 46.54 0.00 0.00 10708.00 2225.49 15182.51
00:29:29.046 [2024-12-06T17:06:16.873Z] ===================================================================================================================
00:29:29.046 [2024-12-06T17:06:16.873Z] Total : 11913.68 46.54 0.00 0.00 10708.00 2225.49 15182.51
00:29:29.046 11509.00 IOPS, 44.96 MiB/s
00:29:29.046 Latency(us)
00:29:29.046 [2024-12-06T17:06:16.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:29.046 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:29:29.046 Nvme1n1 : 1.01 11569.55 45.19 0.00 0.00 11028.63 4532.91 16711.68
00:29:29.046 [2024-12-06T17:06:16.873Z] ===================================================================================================================
00:29:29.046 [2024-12-06T17:06:16.873Z] Total : 11569.55 45.19 0.00 0.00 11028.63 4532.91 16711.68
00:29:29.046 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3271697
00:29:29.046 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3271699
00:29:29.047 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3271701
00:29:29.047 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:29.047 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:29.047 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait
-- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:29.306 rmmod nvme_tcp 00:29:29.306 rmmod nvme_fabrics 00:29:29.306 rmmod nvme_keyring 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3271350 ']' 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3271350 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3271350 ']' 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3271350 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3271350 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3271350' 00:29:29.306 killing process with pid 3271350 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3271350 00:29:29.306 18:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3271350 00:29:29.306 18:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:29.306 18:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:29.306 18:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:29.306 18:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:29:29.306 18:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:29:29.306 18:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:29:29.306 18:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:29:29.306 18:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:29.306 18:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:29.306 18:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.306 18:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.306 18:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:31.843 00:29:31.843 real 0m10.538s 00:29:31.843 user 0m13.933s 00:29:31.843 sys 0m5.674s 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:31.843 ************************************ 00:29:31.843 END TEST nvmf_bdev_io_wait 00:29:31.843 ************************************ 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:31.843 ************************************ 00:29:31.843 START TEST nvmf_queue_depth 00:29:31.843 ************************************ 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:31.843 * Looking for test storage... 
00:29:31.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:29:31.843 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:31.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.844 --rc genhtml_branch_coverage=1 00:29:31.844 --rc genhtml_function_coverage=1 00:29:31.844 --rc genhtml_legend=1 00:29:31.844 --rc geninfo_all_blocks=1 00:29:31.844 --rc geninfo_unexecuted_blocks=1 00:29:31.844 00:29:31.844 ' 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:31.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.844 --rc genhtml_branch_coverage=1 00:29:31.844 --rc genhtml_function_coverage=1 00:29:31.844 --rc genhtml_legend=1 00:29:31.844 --rc geninfo_all_blocks=1 00:29:31.844 --rc geninfo_unexecuted_blocks=1 00:29:31.844 00:29:31.844 ' 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:31.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.844 --rc genhtml_branch_coverage=1 00:29:31.844 --rc genhtml_function_coverage=1 00:29:31.844 --rc genhtml_legend=1 00:29:31.844 --rc geninfo_all_blocks=1 00:29:31.844 --rc geninfo_unexecuted_blocks=1 00:29:31.844 00:29:31.844 ' 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:31.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.844 --rc genhtml_branch_coverage=1 00:29:31.844 --rc genhtml_function_coverage=1 00:29:31.844 --rc genhtml_legend=1 00:29:31.844 --rc geninfo_all_blocks=1 00:29:31.844 --rc 
geninfo_unexecuted_blocks=1 00:29:31.844 00:29:31.844 ' 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:31.844 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:31.845 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.845 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.845 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.845 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:31.845 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:31.845 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:29:31.845 18:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:37.120 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:37.121 18:06:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:37.121 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:37.121 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:29:37.121 Found net devices under 0000:31:00.0: cvl_0_0 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:37.121 Found net devices under 0000:31:00.1: cvl_0_1 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:37.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:37.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:29:37.121 00:29:37.121 --- 10.0.0.2 ping statistics --- 00:29:37.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.121 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:29:37.121 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:37.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:37.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:29:37.121 00:29:37.121 --- 10.0.0.1 ping statistics --- 00:29:37.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.122 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3276392 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3276392 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3276392 ']' 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
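[annotator's note] Condensed, the nvmf_tcp_init plumbing traced above builds a two-endpoint topology from the two e810 ports: the target port moves into a private namespace with 10.0.0.2, the initiator side keeps 10.0.0.1, the NVMe/TCP port is opened, and both directions are ping-verified. All commands below are lifted directly from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator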
00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:37.122 18:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:37.122 [2024-12-06 18:06:24.776362] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:37.122 [2024-12-06 18:06:24.777343] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:29:37.122 [2024-12-06 18:06:24.777382] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.122 [2024-12-06 18:06:24.852374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.122 [2024-12-06 18:06:24.880947] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.122 [2024-12-06 18:06:24.880976] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.122 [2024-12-06 18:06:24.880981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.122 [2024-12-06 18:06:24.880986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:37.122 [2024-12-06 18:06:24.880990] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:37.122 [2024-12-06 18:06:24.881446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.122 [2024-12-06 18:06:24.932209] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:37.122 [2024-12-06 18:06:24.932386] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
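[annotator's note] The queue_depth run launches its target with --interrupt-mode inside that namespace, which is why app_thread and the poll group immediately report being set to intr mode. Stripped of the xtrace noise, and assuming backgrounding as nvmfappstart does, the launch reduces to:

  # nvmfappstart -m 0x2, as traced above: one reactor on core 1,
  # interrupt-driven rather than polling.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!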
00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:38.059 [2024-12-06 18:06:25.586164] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:38.059 Malloc0 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:38.059 [2024-12-06 18:06:25.641882] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.059 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3276524 00:29:38.060 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:38.060 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3276524 /var/tmp/bdevperf.sock 00:29:38.060 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3276524 ']' 00:29:38.060 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:38.060 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:38.060 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:38.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:38.060 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.060 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:38.060 18:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:29:38.060 [2024-12-06 18:06:25.680169] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:29:38.060 [2024-12-06 18:06:25.680218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276524 ] 00:29:38.060 [2024-12-06 18:06:25.758038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.060 [2024-12-06 18:06:25.794004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.994 18:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:38.994 18:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:29:38.994 18:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:38.994 18:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.994 18:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:38.994 NVMe0n1 00:29:38.994 18:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.994 18:06:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:38.994 Running I/O for 10 seconds... 00:29:40.863 11264.00 IOPS, 44.00 MiB/s [2024-12-06T17:06:29.626Z] 11878.00 IOPS, 46.40 MiB/s [2024-12-06T17:06:31.004Z] 12357.33 IOPS, 48.27 MiB/s [2024-12-06T17:06:31.940Z] 12705.75 IOPS, 49.63 MiB/s [2024-12-06T17:06:32.878Z] 12901.80 IOPS, 50.40 MiB/s [2024-12-06T17:06:33.816Z] 13056.00 IOPS, 51.00 MiB/s [2024-12-06T17:06:34.756Z] 13168.29 IOPS, 51.44 MiB/s [2024-12-06T17:06:35.692Z] 13242.00 IOPS, 51.73 MiB/s [2024-12-06T17:06:36.632Z] 13311.44 IOPS, 52.00 MiB/s [2024-12-06T17:06:36.920Z] 13336.40 IOPS, 52.10 MiB/s 00:29:49.093 Latency(us) 00:29:49.093 [2024-12-06T17:06:36.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.093 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:29:49.093 Verification LBA range: start 0x0 length 0x4000 00:29:49.093 NVMe0n1 : 10.04 13374.75 52.25 0.00 0.00 76292.04 10649.60 56360.96 00:29:49.093 [2024-12-06T17:06:36.920Z] =================================================================================================================== 00:29:49.093 [2024-12-06T17:06:36.920Z] Total : 13374.75 52.25 0.00 0.00 76292.04 10649.60 56360.96 00:29:49.093 { 00:29:49.093 "results": [ 00:29:49.093 { 00:29:49.093 "job": "NVMe0n1", 00:29:49.093 "core_mask": "0x1", 00:29:49.093 "workload": "verify", 00:29:49.093 "status": "finished", 00:29:49.093 "verify_range": { 00:29:49.093 "start": 0, 00:29:49.093 "length": 16384 00:29:49.093 }, 00:29:49.093 "queue_depth": 1024, 00:29:49.093 "io_size": 4096, 00:29:49.093 "runtime": 10.043629, 00:29:49.093 "iops": 13374.74731493965, 00:29:49.093 "mibps": 52.24510669898301, 00:29:49.093 "io_failed": 0, 00:29:49.093 "io_timeout": 0, 00:29:49.093 "avg_latency_us": 76292.03869565972, 00:29:49.093 "min_latency_us": 10649.6, 00:29:49.093 "max_latency_us": 56360.96 00:29:49.093 } 00:29:49.093 ], 
00:29:49.093 "core_count": 1 00:29:49.093 } 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3276524 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3276524 ']' 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3276524 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3276524 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3276524' 00:29:49.093 killing process with pid 3276524 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3276524 00:29:49.093 Received shutdown signal, test time was about 10.000000 seconds 00:29:49.093 00:29:49.093 Latency(us) 00:29:49.093 [2024-12-06T17:06:36.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.093 [2024-12-06T17:06:36.920Z] =================================================================================================================== 00:29:49.093 [2024-12-06T17:06:36.920Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3276524 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:49.093 rmmod nvme_tcp 00:29:49.093 rmmod nvme_fabrics 00:29:49.093 rmmod nvme_keyring 00:29:49.093 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:49.420 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:29:49.420 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:29:49.420 18:06:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3276392 ']' 00:29:49.420 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3276392 00:29:49.420 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3276392 ']' 00:29:49.420 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3276392 00:29:49.420 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:29:49.420 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:49.420 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3276392 00:29:49.420 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:49.420 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:49.420 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3276392' 00:29:49.420 killing process with pid 3276392 00:29:49.420 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3276392 00:29:49.420 18:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3276392 00:29:49.420 18:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:49.420 18:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:49.420 18:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:49.420 18:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:29:49.420 18:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:29:49.420 18:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:29:49.420 18:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:49.420 18:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:49.420 18:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:49.420 18:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.420 18:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.420 18:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.323 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:51.323 00:29:51.323 real 0m19.925s 00:29:51.323 user 0m23.580s 00:29:51.323 sys 0m5.512s 00:29:51.323 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
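The queue_depth test above reduces to a short RPC sequence against the running target plus one bdevperf run. A condensed sketch of the same steps, with the RPC arguments copied from the target/queue_depth.sh@23-35 trace and $SPDK_DIR as a placeholder for the checkout path:

  SPDK_DIR=/path/to/spdk            # assumption: local SPDK tree
  RPC="$SPDK_DIR/scripts/rpc.py"

  # Target-side setup: TCP transport, a 64 MiB/512 B malloc bdev, one
  # subsystem with that namespace, listening on 10.0.0.2:4420.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: 10 s verify workload, queue depth 1024, 4 KiB I/O.
  "$SPDK_DIR/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
      -q 1024 -o 4096 -w verify -t 10 &
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

The reported numbers are self-consistent: 13374.75 IOPS x 4096 B is about 52.25 MiB/s, matching the MiB/s column, and by Little's law 1024 outstanding I/Os / 13374.75 IOPS is roughly 76.6 ms, in line with the ~76.3 ms reported average latency.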
common/autotest_common.sh@1130 -- # xtrace_disable 00:29:51.323 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:51.323 ************************************ 00:29:51.323 END TEST nvmf_queue_depth 00:29:51.323 ************************************ 00:29:51.323 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:51.323 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:51.323 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:51.323 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:51.583 ************************************ 00:29:51.583 START TEST nvmf_target_multipath 00:29:51.583 ************************************ 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:51.583 * Looking for test storage... 00:29:51.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:29:51.583 18:06:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:29:51.583 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:51.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.584 --rc genhtml_branch_coverage=1 00:29:51.584 --rc genhtml_function_coverage=1 00:29:51.584 --rc genhtml_legend=1 00:29:51.584 --rc geninfo_all_blocks=1 00:29:51.584 --rc geninfo_unexecuted_blocks=1 00:29:51.584 00:29:51.584 ' 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:51.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.584 --rc genhtml_branch_coverage=1 00:29:51.584 --rc genhtml_function_coverage=1 00:29:51.584 --rc genhtml_legend=1 00:29:51.584 --rc geninfo_all_blocks=1 00:29:51.584 --rc geninfo_unexecuted_blocks=1 00:29:51.584 00:29:51.584 ' 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:51.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.584 --rc genhtml_branch_coverage=1 00:29:51.584 --rc genhtml_function_coverage=1 00:29:51.584 --rc genhtml_legend=1 00:29:51.584 --rc geninfo_all_blocks=1 00:29:51.584 --rc 
geninfo_unexecuted_blocks=1 00:29:51.584 00:29:51.584 ' 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:51.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.584 --rc genhtml_branch_coverage=1 00:29:51.584 --rc genhtml_function_coverage=1 00:29:51.584 --rc genhtml_legend=1 00:29:51.584 --rc geninfo_all_blocks=1 00:29:51.584 --rc geninfo_unexecuted_blocks=1 00:29:51.584 00:29:51.584 ' 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:51.584 18:06:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:29:51.584 18:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
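As part of sourcing nvmf/common.sh above, the harness derives a host NQN and host ID from nvme-cli. A minimal sketch of that step (the UUID logged earlier is machine-specific; the suffix extraction is an assumption about how NVME_HOSTID is derived from the NQN):

  # nvme-cli emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumption: keep the trailing UUID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  echo "${NVME_HOST[@]}"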
00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:56.862 18:06:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:56.862 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:56.862 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:56.862 18:06:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.862 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:56.863 Found net devices under 0000:31:00.0: cvl_0_0 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:56.863 Found net devices under 0000:31:00.1: cvl_0_1 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
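The device-discovery loop above maps each supported PCI ID to its kernel net devices through sysfs. A standalone sketch of the same lookup for the two e810 ports this machine reports (the loop structure is illustrative; the sysfs glob and output format follow the trace):

  # List the netdevs bound to each NIC of interest.
  for pci in 0000:31:00.0 0000:31:00.1; do
      # Each entry under /sys/bus/pci/devices/$pci/net/ is an interface name.
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
      done
  done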
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:56.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:56.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:29:56.863 00:29:56.863 --- 10.0.0.2 ping statistics --- 00:29:56.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.863 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:56.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:29:56.863 00:29:56.863 --- 10.0.0.1 ping statistics --- 00:29:56.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.863 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:29:56.863 only one NIC for nvmf test 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:56.863 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:56.863 rmmod nvme_tcp 00:29:56.864 rmmod nvme_fabrics 00:29:56.864 rmmod nvme_keyring 00:29:56.864 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:56.864 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:56.864 18:06:44 
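nvmf_tcp_init above splits the two ports across a network namespace so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) can reach each other on a single host. A condensed sketch of that wiring, with every command taken from the common.sh trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port moves into the ns
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Both pings succeed, but NVMF_SECOND_TARGET_IP is left empty because both physical ports are consumed by this primary path; multipath.sh therefore prints 'only one NIC for nvmf test' and exits 0 without running the multipath I/O stages.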
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:56.864 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:56.864 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:56.864 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:56.864 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:56.864 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:56.864 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:56.864 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:56.864 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:56.864 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:56.864 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:56.864 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.864 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.864 18:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.405 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:59.405 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:29:59.405 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:29:59.405 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:59.405 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:59.405 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:59.405 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:59.405 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:59.405 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:59.405 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:59.405 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:59.405 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:59.405 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:59.405 18:06:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:59.405 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:59.405 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:59.405 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:59.406 00:29:59.406 real 0m7.564s 00:29:59.406 user 0m1.469s 00:29:59.406 sys 0m3.930s 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:59.406 ************************************ 00:29:59.406 END TEST nvmf_target_multipath 00:29:59.406 ************************************ 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:59.406 ************************************ 00:29:59.406 START TEST nvmf_zcopy 00:29:59.406 ************************************ 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:59.406 * Looking for test storage... 
00:29:59.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:59.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.406 --rc genhtml_branch_coverage=1 00:29:59.406 --rc genhtml_function_coverage=1 00:29:59.406 --rc genhtml_legend=1 00:29:59.406 --rc geninfo_all_blocks=1 00:29:59.406 --rc geninfo_unexecuted_blocks=1 00:29:59.406 00:29:59.406 ' 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:59.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.406 --rc genhtml_branch_coverage=1 00:29:59.406 --rc genhtml_function_coverage=1 00:29:59.406 --rc genhtml_legend=1 00:29:59.406 --rc geninfo_all_blocks=1 00:29:59.406 --rc geninfo_unexecuted_blocks=1 00:29:59.406 00:29:59.406 ' 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:59.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.406 --rc genhtml_branch_coverage=1 00:29:59.406 --rc genhtml_function_coverage=1 00:29:59.406 --rc genhtml_legend=1 00:29:59.406 --rc geninfo_all_blocks=1 00:29:59.406 --rc geninfo_unexecuted_blocks=1 00:29:59.406 00:29:59.406 ' 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:59.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.406 --rc genhtml_branch_coverage=1 00:29:59.406 --rc genhtml_function_coverage=1 00:29:59.406 --rc genhtml_legend=1 00:29:59.406 --rc geninfo_all_blocks=1 00:29:59.406 --rc geninfo_unexecuted_blocks=1 00:29:59.406 00:29:59.406 ' 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
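The scripts/common.sh trace above is a plain component-wise version comparison: lt 1.15 2 splits both strings on '.', '-' and ':' and compares numeric fields left to right, so lcov 1.15 sorts below 2 and the harness falls back to the pre-2.0 --rc lcov_* option names it exports as LCOV_OPTS right after. A self-contained sketch of the same logic (the function name version_lt is ours, not the script's):

  version_lt() {                        # usage: version_lt 1.15 2
    local -a v1 v2
    local i len
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # first differing field decides
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1                            # equal versions are not less-than
  }
  version_lt 1.15 2 && echo 'lcov < 2: use legacy --rc lcov_*_coverage names'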
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.406 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.407 18:06:46 
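Before the PATH shuffling above, sourcing nvmf/common.sh also pinned the initiator identity that every later connect reuses. A sketch of that setup; the values are the ones generated in this run, and deriving NVME_HOSTID by stripping through the last ':' of the NQN is our reading of the output shape, not the helper's literal code:

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # here: nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # uuid part: 801c19ac-fce9-ec11-9bc7-a4bf019282bb (assumed derivation)
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_CONNECT='nvme connect'        # later used as: $NVME_CONNECT "${NVME_HOST[@]}" ...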
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:29:59.407 18:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:04.681 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:04.681 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:04.681 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:04.681 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:04.681 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:04.681 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:04.681 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:04.681 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:04.681 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:04.681 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:04.681 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:04.682 18:06:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:04.682 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:04.682 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:04.682 Found net devices under 0000:31:00.0: cvl_0_0 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
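The NIC discovery above is pure sysfs walking: each PCI function whose vendor:device pair landed in the e810/x722/mlx arrays is mapped to its bound kernel net device by globbing its net/ directory. A minimal sketch for the first port found in this run:

  pci=0000:31:00.0                                  # E810 port matched as 0x8086:0x159b
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $path ]] || continue                      # glob misses if no netdev is bound
    echo "Found net devices under $pci: ${path##*/}"
  done
  # prints: Found net devices under 0000:31:00.0: cvl_0_0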
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:04.682 Found net devices under 0000:31:00.1: cvl_0_1 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:04.682 18:06:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:04.682 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.682 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.682 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.682 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:04.682 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:04.682 18:06:52 
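The nvmf_tcp_init steps above build the whole point-to-point test topology on a single host: the target-side port cvl_0_0 moves into a private namespace with 10.0.0.2, while the initiator-side port cvl_0_1 stays in the root namespace with 10.0.0.1. Collected from the trace above and the link-up calls just below:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

Running both ends over real E810 ports on one machine keeps the test hermetic while still exercising the physical NIC datapath.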
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:04.682 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:04.682 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:04.682 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:04.682 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:04.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:04.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:30:04.682 00:30:04.682 --- 10.0.0.2 ping statistics --- 00:30:04.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.682 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:30:04.682 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:04.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:04.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:30:04.682 00:30:04.682 --- 10.0.0.1 ping statistics --- 00:30:04.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.682 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:30:04.682 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:04.682 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:04.682 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:04.682 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:04.682 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:04.683 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:04.683 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:04.683 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:04.683 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:04.683 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:04.683 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:04.683 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:04.683 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:04.683 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3287447 00:30:04.683 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3287447 00:30:04.683 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
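The ipts wrapper above is the other half of the SPDK_NVMF tagging scheme used at teardown: each rule it installs embeds its own argument string in an iptables comment, which is exactly what the later iptables-save | grep -v SPDK_NVMF filter keys on. From the trace, together with the two connectivity probes:

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns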
common/autotest_common.sh@835 -- # '[' -z 3287447 ']' 00:30:04.683 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.683 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:04.683 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:04.683 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:04.683 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:04.683 18:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:04.683 [2024-12-06 18:06:52.300254] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:04.683 [2024-12-06 18:06:52.301263] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:30:04.683 [2024-12-06 18:06:52.301309] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.683 [2024-12-06 18:06:52.387359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.683 [2024-12-06 18:06:52.430448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:04.683 [2024-12-06 18:06:52.430496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.683 [2024-12-06 18:06:52.430504] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:04.683 [2024-12-06 18:06:52.430512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:04.683 [2024-12-06 18:06:52.430518] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:04.683 [2024-12-06 18:06:52.431217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.683 [2024-12-06 18:06:52.504319] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:04.683 [2024-12-06 18:06:52.504590] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
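nvmfappstart above launches the target inside the namespace with a one-core mask and interrupt mode, then blocks until the app is ready. The launch line below is assembled verbatim from the trace; the readiness poll is an assumption about what waitforlisten amounts to, not a transcript of it:

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!

  # Assumed readiness check: poll the default RPC socket until it answers.
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done

With --interrupt-mode the single reactor sleeps in the kernel rather than busy-polling, which is the point of this interrupt-mode variant of the suite.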
00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:05.620 [2024-12-06 18:06:53.128036] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:05.620 [2024-12-06 18:06:53.144333] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:05.620 18:06:53 
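The rpc_cmd calls above are the complete target-side provisioning. Written out against scripts/rpc.py with flags copied from the trace (rpc_cmd forwards to rpc.py, possibly through a cached daemon, so the direct form below is equivalent rather than identical):

  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy     # TCP transport with zero-copy enabled
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10                                # any host, up to 10 namespaces
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420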
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:05.620 malloc0 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.620 { 00:30:05.620 "params": { 00:30:05.620 "name": "Nvme$subsystem", 00:30:05.620 "trtype": "$TEST_TRANSPORT", 00:30:05.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.620 "adrfam": "ipv4", 00:30:05.620 "trsvcid": "$NVMF_PORT", 00:30:05.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.620 "hdgst": ${hdgst:-false}, 00:30:05.620 "ddgst": ${ddgst:-false} 00:30:05.620 }, 00:30:05.620 "method": "bdev_nvme_attach_controller" 00:30:05.620 } 00:30:05.620 EOF 00:30:05.620 )") 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:05.620 18:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:05.620 "params": { 00:30:05.620 "name": "Nvme1", 00:30:05.620 "trtype": "tcp", 00:30:05.620 "traddr": "10.0.0.2", 00:30:05.620 "adrfam": "ipv4", 00:30:05.620 "trsvcid": "4420", 00:30:05.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:05.620 "hdgst": false, 00:30:05.620 "ddgst": false 00:30:05.620 }, 00:30:05.620 "method": "bdev_nvme_attach_controller" 00:30:05.620 }' 00:30:05.620 [2024-12-06 18:06:53.218648] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
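The initiator side above is bdevperf driven entirely by a generated JSON config passed on a file descriptor. The inner method/params object is exactly what gen_nvmf_target_json printed above; wrapping it in the standard SPDK JSON-config shape ("subsystems" containing a "bdev" section) is our addition to make the sketch self-contained:

  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0            # 32 MiB bdev, 4 KiB blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  build/examples/bdevperf --json <(
    cat <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  ) -t 10 -q 128 -w verify -o 8192                                # 10 s, QD 128, 8 KiB verify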
00:30:05.620 [2024-12-06 18:06:53.218725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287774 ]
00:30:05.620 [2024-12-06 18:06:53.304525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:05.620 [2024-12-06 18:06:53.357921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:06.187 Running I/O for 10 seconds...
00:30:08.056 6765.00 IOPS, 52.85 MiB/s
[2024-12-06T17:06:56.817Z] 6735.00 IOPS, 52.62 MiB/s
[2024-12-06T17:06:57.755Z] 7603.67 IOPS, 59.40 MiB/s
[2024-12-06T17:06:59.133Z] 8202.00 IOPS, 64.08 MiB/s
[2024-12-06T17:07:00.070Z] 8566.80 IOPS, 66.93 MiB/s
[2024-12-06T17:07:01.005Z] 8810.83 IOPS, 68.83 MiB/s
[2024-12-06T17:07:01.940Z] 8980.29 IOPS, 70.16 MiB/s
[2024-12-06T17:07:02.878Z] 9107.38 IOPS, 71.15 MiB/s
[2024-12-06T17:07:03.815Z] 9209.89 IOPS, 71.95 MiB/s
[2024-12-06T17:07:03.815Z] 9292.60 IOPS, 72.60 MiB/s
00:30:15.988 Latency(us)
00:30:15.988 [2024-12-06T17:07:03.815Z] Device Information          : runtime(s)     IOPS    MiB/s  Fail/s  TO/s   Average       min       max
00:30:15.988 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:30:15.988 Verification LBA range: start 0x0 length 0x1000
00:30:15.988 Nvme1n1                     :      10.01  9294.68    72.61    0.00  0.00  13729.49   2416.64  27088.21
00:30:15.988 [2024-12-06T17:07:03.815Z] ===================================================================================================================
00:30:15.988 [2024-12-06T17:07:03.815Z] Total                       :             9294.68    72.61    0.00  0.00  13729.49   2416.64  27088.21
00:30:16.248 18:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3290092
00:30:16.248 18:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:30:16.248 18:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:30:16.248 18:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:30:16.248 18:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:30:16.248 18:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:30:16.248 18:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:30:16.248 18:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:30:16.248 18:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:30:16.248 {
00:30:16.248 "params": {
00:30:16.248 "name": "Nvme$subsystem",
00:30:16.248 "trtype": "$TEST_TRANSPORT",
00:30:16.248 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:16.248 "adrfam": "ipv4",
00:30:16.248 "trsvcid": "$NVMF_PORT",
00:30:16.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:16.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:16.248 "hdgst": ${hdgst:-false},
00:30:16.248 "ddgst": ${ddgst:-false}
00:30:16.248 },
00:30:16.248 "method": "bdev_nvme_attach_controller"
00:30:16.248 }
00:30:16.248 EOF
00:30:16.248 )")
00:30:16.248 18:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:30:16.248
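Two quick consistency checks on the table above. Throughput is just IOPS times the 8 KiB I/O size: 9294.68 IOPS x 8192 B = 76,142,019 B/s, which is 72.61 MiB/s and matches the MiB/s column. Average latency follows Little's law at queue depth 128: W = L / lambda = 128 / 9294.68 IOPS = 13.77 ms, in line with the reported 13729.49 us average (the small gap reflects the ramp-up visible in the IOPS progression).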
[2024-12-06 18:07:03.855602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.248 [2024-12-06 18:07:03.855630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.248 18:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:16.248 18:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:16.248 18:07:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:16.248 "params": { 00:30:16.248 "name": "Nvme1", 00:30:16.248 "trtype": "tcp", 00:30:16.248 "traddr": "10.0.0.2", 00:30:16.248 "adrfam": "ipv4", 00:30:16.248 "trsvcid": "4420", 00:30:16.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:16.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:16.248 "hdgst": false, 00:30:16.248 "ddgst": false 00:30:16.248 }, 00:30:16.248 "method": "bdev_nvme_attach_controller" 00:30:16.248 }' 00:30:16.248 [2024-12-06 18:07:03.863568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.248 [2024-12-06 18:07:03.863578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.248 [2024-12-06 18:07:03.871567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.248 [2024-12-06 18:07:03.871576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.248 [2024-12-06 18:07:03.879567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.248 [2024-12-06 18:07:03.879575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.248 [2024-12-06 18:07:03.882515] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
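The repeating pair above, which continues for the rest of this test below, reads as deliberate churn rather than a failure: while the second bdevperf run pushes I/O, the script keeps re-issuing namespace-management RPCs against cnode1, and every retried add for NSID 1 is rejected because namespace 1 already exists. A hypothetical reduction of that driver loop (the real one lives in target/zcopy.sh, whose body is not traced here):

  while kill -0 "$perfpid" 2>/dev/null; do               # as long as bdevperf runs
    scripts/rpc.py nvmf_subsystem_add_ns \
      nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true    # expected: NSID 1 already in use
  done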
00:30:16.248 [2024-12-06 18:07:03.882563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290092 ] 00:30:16.248 [2024-12-06 18:07:03.887566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.248 [2024-12-06 18:07:03.887576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.248 [2024-12-06 18:07:03.899567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.248 [2024-12-06 18:07:03.899575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.248 [2024-12-06 18:07:03.907566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.248 [2024-12-06 18:07:03.907574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:03.915567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.249 [2024-12-06 18:07:03.915575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:03.923567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.249 [2024-12-06 18:07:03.923574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:03.931566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.249 [2024-12-06 18:07:03.931574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:03.939567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.249 [2024-12-06 18:07:03.939574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:03.947150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.249 [2024-12-06 18:07:03.947567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.249 [2024-12-06 18:07:03.947574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:03.955567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.249 [2024-12-06 18:07:03.955575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:03.963567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.249 [2024-12-06 18:07:03.963576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:03.971568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.249 [2024-12-06 18:07:03.971578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:03.976207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.249 [2024-12-06 18:07:03.979567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.249 [2024-12-06 18:07:03.979578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:03.987570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:30:16.249 [2024-12-06 18:07:03.987578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:03.995574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.249 [2024-12-06 18:07:03.995586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:04.003569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.249 [2024-12-06 18:07:04.003580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:04.011568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.249 [2024-12-06 18:07:04.011578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:04.019568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.249 [2024-12-06 18:07:04.019576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:04.027567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.249 [2024-12-06 18:07:04.027580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:04.035567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.249 [2024-12-06 18:07:04.035575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:04.043573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.249 [2024-12-06 18:07:04.043587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:04.051569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.249 [2024-12-06 18:07:04.051580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:04.059568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.249 [2024-12-06 18:07:04.059577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.249 [2024-12-06 18:07:04.067568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.249 [2024-12-06 18:07:04.067577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.509 [2024-12-06 18:07:04.075568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.509 [2024-12-06 18:07:04.075579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.509 [2024-12-06 18:07:04.083568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.509 [2024-12-06 18:07:04.083577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.509 [2024-12-06 18:07:04.091566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.509 [2024-12-06 18:07:04.091574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.509 [2024-12-06 18:07:04.099566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.509 [2024-12-06 18:07:04.099574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.509 [2024-12-06 
18:07:04.107566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.509 [2024-12-06 18:07:04.107573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.509 [2024-12-06 18:07:04.115566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.509 [2024-12-06 18:07:04.115574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.509 [2024-12-06 18:07:04.123566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.509 [2024-12-06 18:07:04.123574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.509 [2024-12-06 18:07:04.131567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.509 [2024-12-06 18:07:04.131576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.509 [2024-12-06 18:07:04.139566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.509 [2024-12-06 18:07:04.139573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.509 [2024-12-06 18:07:04.147566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.509 [2024-12-06 18:07:04.147573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.509 [2024-12-06 18:07:04.155566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.509 [2024-12-06 18:07:04.155573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.509 [2024-12-06 18:07:04.163566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.509 [2024-12-06 18:07:04.163573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.509 [2024-12-06 18:07:04.171566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.509 [2024-12-06 18:07:04.171576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.509 [2024-12-06 18:07:04.179567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.509 [2024-12-06 18:07:04.179577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.509 [2024-12-06 18:07:04.187567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.509 [2024-12-06 18:07:04.187574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.509 [2024-12-06 18:07:04.195567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.509 [2024-12-06 18:07:04.195574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.509 [2024-12-06 18:07:04.203567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.509 [2024-12-06 18:07:04.203574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.509 [2024-12-06 18:07:04.211566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.509 [2024-12-06 18:07:04.211574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:16.509 [2024-12-06 18:07:04.219567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:16.509 [2024-12-06 18:07:04.219575] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:16.509 [2024-12-06 18:07:04.227575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:30:16.509 [2024-12-06 18:07:04.227590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:30:16.509 Running I/O for 5 seconds...
[... the same two-line error pair (subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, then nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats once per rejected attempt, timestamped 2024-12-06 18:07:04.235570 through 18:07:05.228470, elapsed 00:30:16.509-00:30:17.550 ...]
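Each pair above is one rejected add-namespace RPC: spdk_nvmf_subsystem_add_ns_ext() in subsystem.c refuses the request because NSID 1 is already claimed, and the RPC completion path in nvmf_rpc.c (nvmf_rpc_ns_paused) then logs the failure. A rough sketch of how the same rejection can be provoked by hand against a running target, using two nvmf_subsystem_add_ns calls that ask for the same NSID; the NQN and bdev names below are hypothetical, not taken from this run:

  # Hypothetical repro sketch -- assumes a running SPDK target, scripts/rpc.py
  # reachable, an existing subsystem nqn.2016-06.io.spdk:cnode1, and two Malloc bdevs.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # claims NSID 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1   # fails: NSID 1 already in use

A test that loops such calls while the I/O generator runs (the "Running I/O for 5 seconds..." banner and periodic IOPS lines below) would emit exactly one error pair per attempt, which is the pattern collapsed here.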
00:30:17.550 19507.00 IOPS, 152.40 MiB/s [2024-12-06T17:07:05.377Z]
[... error pairs continue, timestamped 2024-12-06 18:07:05.240459 through 18:07:06.232632, elapsed 00:30:17.550-00:30:18.588 ...]
00:30:18.588 19576.50 IOPS, 152.94 MiB/s [2024-12-06T17:07:06.415Z]
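A quick consistency check on the two periodic throughput samples above: MiB/s * 1024 / IOPS gives KiB per I/O, and both samples work out to a fixed 8 KiB I/O size. Verified with the log's own numbers:

  # KiB per I/O = MiB/s * 1024 / IOPS, for both samples above
  awk 'BEGIN { printf "%.2f KiB  %.2f KiB\n", 152.40*1024/19507.00, 152.94*1024/19576.50 }'
  # prints: 8.00 KiB  8.00 KiB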
[... error pairs continue, timestamped 2024-12-06 18:07:06.244525 through 18:07:06.792729, elapsed 00:30:18.588-00:30:19.108 ...]
00:30:19.108 [2024-12-06 18:07:06.792744]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.108 [2024-12-06 18:07:06.801951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.108 [2024-12-06 18:07:06.801966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.108 [2024-12-06 18:07:06.811305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.108 [2024-12-06 18:07:06.811320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.108 [2024-12-06 18:07:06.816975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.108 [2024-12-06 18:07:06.816990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.108 [2024-12-06 18:07:06.826417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.108 [2024-12-06 18:07:06.826432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.108 [2024-12-06 18:07:06.832136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.108 [2024-12-06 18:07:06.832151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.108 [2024-12-06 18:07:06.842394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.108 [2024-12-06 18:07:06.842410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.108 [2024-12-06 18:07:06.848184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.108 [2024-12-06 18:07:06.848199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.108 [2024-12-06 18:07:06.857949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.108 [2024-12-06 18:07:06.857964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.108 [2024-12-06 18:07:06.866960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.108 [2024-12-06 18:07:06.866975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.108 [2024-12-06 18:07:06.872619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.108 [2024-12-06 18:07:06.872633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.108 [2024-12-06 18:07:06.882351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.108 [2024-12-06 18:07:06.882366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.108 [2024-12-06 18:07:06.890841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.108 [2024-12-06 18:07:06.890855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.108 [2024-12-06 18:07:06.896628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.108 [2024-12-06 18:07:06.896642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.108 [2024-12-06 18:07:06.906202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.108 [2024-12-06 18:07:06.906217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.108 [2024-12-06 18:07:06.914886] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.108 [2024-12-06 18:07:06.914900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.108 [2024-12-06 18:07:06.920647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.108 [2024-12-06 18:07:06.920662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.108 [2024-12-06 18:07:06.929932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.108 [2024-12-06 18:07:06.929947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.367 [2024-12-06 18:07:06.939502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.367 [2024-12-06 18:07:06.939519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.367 [2024-12-06 18:07:06.945078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.367 [2024-12-06 18:07:06.945093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.367 [2024-12-06 18:07:06.954931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.367 [2024-12-06 18:07:06.954946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.367 [2024-12-06 18:07:06.960544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.367 [2024-12-06 18:07:06.960559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:06.970667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:06.970682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:06.976440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:06.976454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:06.986433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:06.986448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:06.992158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:06.992172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:07.002332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:07.002347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:07.008237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:07.008251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:07.018441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:07.018455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:07.024127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:07.024142] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:07.034462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:07.034476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:07.040313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:07.040327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:07.050328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:07.050344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:07.058158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:07.058173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:07.065681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:07.065695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:07.076356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:07.076371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:07.088667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:07.088681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:07.100646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:07.100661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:07.112427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:07.112441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:07.124967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:07.124982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:07.136711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:07.136725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:07.148051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:07.148066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:07.160448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:07.160462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:07.171642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.368 [2024-12-06 18:07:07.171657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.368 [2024-12-06 18:07:07.177495] 
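For readers unfamiliar with the failure above, a minimal sketch of how the duplicate-NSID condition can be reproduced against a running SPDK nvmf target via scripts/rpc.py. The NQN, bdev names, and the pre-existing claim on NSID 1 are hypothetical examples, not values taken from this run; the flag spellings follow SPDK's rpc.py as currently documented and should be treated as an assumption if your SPDK version differs.

#!/usr/bin/env bash
# Hypothetical reproduction of the duplicate-NSID error seen in this log.
# Assumes a running SPDK nvmf target with a subsystem and two malloc bdevs;
# the NQN and bdev names below are examples, not values from this run.
RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# First add succeeds and claims NSID 1 on the subsystem.
"$RPC" nvmf_subsystem_add_ns -n 1 "$NQN" Malloc0

# A second add requesting the same NSID is rejected by
# spdk_nvmf_subsystem_add_ns_ext(), and the RPC layer then logs:
#   subsystem.c: *ERROR*: Requested NSID 1 already in use
#   nvmf_rpc.c:  *ERROR*: Unable to add namespace
"$RPC" nvmf_subsystem_add_ns -n 1 "$NQN" Malloc1 \
  || echo "second add failed as expected (NSID 1 already in use)"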
[log condensed: the same duplicate-NSID error pair continues between 18:07:07.177510 and 18:07:08.185901; one periodic throughput sample from the concurrently running I/O workload is interleaved]
00:30:19.628 19588.33 IOPS, 153.03 MiB/s [2024-12-06T17:07:07.455Z]
[log condensed: the error pair continues between 18:07:08.185916 and 18:07:09.006731, with one further throughput sample interleaved; the excerpt is truncated mid-entry at its final timestamp]
00:30:20.667 19586.50 IOPS, 153.02 MiB/s [2024-12-06T17:07:08.494Z]
00:30:21.188 [2024-12-06 18:07:09.006731]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.188 [2024-12-06 18:07:09.006746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.188 [2024-12-06 18:07:09.012553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.188 [2024-12-06 18:07:09.012568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.022749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.022764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.028658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.028673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.038367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.038382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.044192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.044207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.054153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.054169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.063474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.063488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.076232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.076248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.088681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.088697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.100346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.100361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.112772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.112787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.123748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.123764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.129480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.129494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.138174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.138189] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.146965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.146980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.152658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.152672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.162982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.162998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.168713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.168728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.178770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.178786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.191492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.191508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.197340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.197355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.206711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.206726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.212288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.212304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.222277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.222293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.230877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.230892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.236973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.236988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 19608.40 IOPS, 153.19 MiB/s [2024-12-06T17:07:09.274Z] [2024-12-06 18:07:09.247972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.247987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 00:30:21.447 Latency(us) 00:30:21.447 [2024-12-06T17:07:09.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.447 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:30:21.447 
Nvme1n1 : 5.01 19611.73 153.22 0.00 0.00 6521.33 2676.05 10813.44 00:30:21.447 [2024-12-06T17:07:09.274Z] =================================================================================================================== 00:30:21.447 [2024-12-06T17:07:09.274Z] Total : 19611.73 153.22 0.00 0.00 6521.33 2676.05 10813.44 00:30:21.447 [2024-12-06 18:07:09.255573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.255588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.263571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.263584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.447 [2024-12-06 18:07:09.271570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.447 [2024-12-06 18:07:09.271578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.707 [2024-12-06 18:07:09.279574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.707 [2024-12-06 18:07:09.279585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.707 [2024-12-06 18:07:09.287571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.707 [2024-12-06 18:07:09.287582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.707 [2024-12-06 18:07:09.295571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.707 [2024-12-06 18:07:09.295581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.707 [2024-12-06 18:07:09.303569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.707 [2024-12-06 18:07:09.303578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.707 [2024-12-06 18:07:09.311569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.707 [2024-12-06 18:07:09.311577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.707 [2024-12-06 18:07:09.319567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.707 [2024-12-06 18:07:09.319576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.707 [2024-12-06 18:07:09.327567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.707 [2024-12-06 18:07:09.327576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.707 [2024-12-06 18:07:09.335569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.707 [2024-12-06 18:07:09.335579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.707 [2024-12-06 18:07:09.343568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.707 [2024-12-06 18:07:09.343582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.707 [2024-12-06 18:07:09.351567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.707 [2024-12-06 18:07:09.351576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.707 
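For context: the repeated pair above is simply the target rejecting an add-namespace RPC whose NSID is already claimed, which target/zcopy.sh triggers over and over while I/O is in flight. A minimal standalone reproduction of the same rejection, assuming a running nvmf_tgt and scripts/rpc.py on PATH (bdev names hypothetical):

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1    # claims NSID 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1    # rejected: NSID 1 already in use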
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3290092) - No such process 00:30:21.707 18:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3290092 00:30:21.707 18:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.707 18:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.707 18:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:21.707 18:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.707 18:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:21.707 18:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.707 18:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:21.707 delay0 00:30:21.707 18:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.707 18:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:30:21.707 18:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.707 18:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:21.707 18:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.707 18:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:30:21.707 [2024-12-06 18:07:09.505250] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:28.277 Initializing NVMe Controllers 00:30:28.277 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:28.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:28.277 Initialization complete. Launching workers. 
00:30:28.277 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1791 00:30:28.278 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 2066, failed to submit 45 00:30:28.278 success 1902, unsuccessful 164, failed 0 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:28.278 rmmod nvme_tcp 00:30:28.278 rmmod nvme_fabrics 00:30:28.278 rmmod nvme_keyring 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3287447 ']' 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3287447 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3287447 ']' 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3287447 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3287447 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3287447' 00:30:28.278 killing process with pid 3287447 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3287447 00:30:28.278 18:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3287447 00:30:28.278 18:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:28.278 18:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:28.278 18:07:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:28.278 18:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:30:28.278 18:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:30:28.278 18:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:30:28.278 18:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:28.278 18:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:28.278 18:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:28.278 18:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.278 18:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:28.278 18:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:30.812 00:30:30.812 real 0m31.370s 00:30:30.812 user 0m42.440s 00:30:30.812 sys 0m9.762s 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:30.812 ************************************ 00:30:30.812 END TEST nvmf_zcopy 00:30:30.812 ************************************ 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:30.812 ************************************ 00:30:30.812 START TEST nvmf_nmic 00:30:30.812 ************************************ 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:30.812 * Looking for test storage... 
00:30:30.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:30:30.812 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:30.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.813 --rc genhtml_branch_coverage=1 00:30:30.813 --rc genhtml_function_coverage=1 00:30:30.813 --rc genhtml_legend=1 00:30:30.813 --rc geninfo_all_blocks=1 00:30:30.813 --rc geninfo_unexecuted_blocks=1 00:30:30.813 00:30:30.813 ' 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:30.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.813 --rc genhtml_branch_coverage=1 00:30:30.813 --rc genhtml_function_coverage=1 00:30:30.813 --rc genhtml_legend=1 00:30:30.813 --rc geninfo_all_blocks=1 00:30:30.813 --rc geninfo_unexecuted_blocks=1 00:30:30.813 00:30:30.813 ' 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:30.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.813 --rc genhtml_branch_coverage=1 00:30:30.813 --rc genhtml_function_coverage=1 00:30:30.813 --rc genhtml_legend=1 00:30:30.813 --rc geninfo_all_blocks=1 00:30:30.813 --rc geninfo_unexecuted_blocks=1 00:30:30.813 00:30:30.813 ' 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:30.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.813 --rc genhtml_branch_coverage=1 00:30:30.813 --rc genhtml_function_coverage=1 00:30:30.813 --rc genhtml_legend=1 00:30:30.813 --rc geninfo_all_blocks=1 00:30:30.813 --rc geninfo_unexecuted_blocks=1 00:30:30.813 00:30:30.813 ' 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:30.813 18:07:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:30:30.813 18:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:36.112 18:07:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:36.112 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:36.112 18:07:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:36.112 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:36.112 Found net devices under 0000:31:00.0: cvl_0_0 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.112 
18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:36.112 Found net devices under 0000:31:00.1: cvl_0_1 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
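The interface wiring being applied here follows the standard SPDK physical-NIC pattern: the target-side port is moved into its own network namespace so that traffic between the initiator address (10.0.0.1) and the target address (10.0.0.2) actually crosses the link instead of being looped back locally. A generic sketch of the full sequence, with hypothetical interface names eth_ini/eth_tgt standing in for cvl_0_1/cvl_0_0:

  ip netns add spdk_tgt_ns                          # private namespace for the target port
  ip link set eth_tgt netns spdk_tgt_ns             # move the target NIC out of the root namespace
  ip addr add 10.0.0.1/24 dev eth_ini               # initiator address stays in the root namespace
  ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
  ip link set eth_ini up
  ip netns exec spdk_tgt_ns ip link set eth_tgt up
  ping -c 1 10.0.0.2                                # sanity check across the namespace boundary

The log continues below with exactly this link-up and ping verification against the real cvl_0_* devices.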
00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:36.112 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:36.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:36.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:30:36.113 00:30:36.113 --- 10.0.0.2 ping statistics --- 00:30:36.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.113 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:36.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:36.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:30:36.113 00:30:36.113 --- 10.0.0.1 ping statistics --- 00:30:36.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.113 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3297078 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 3297078 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3297078 ']' 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:36.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:36.113 18:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:36.113 [2024-12-06 18:07:23.915863] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:36.113 [2024-12-06 18:07:23.917016] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:30:36.113 [2024-12-06 18:07:23.917065] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:36.372 [2024-12-06 18:07:24.001349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:36.372 [2024-12-06 18:07:24.049514] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:36.372 [2024-12-06 18:07:24.049568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:36.372 [2024-12-06 18:07:24.049577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:36.372 [2024-12-06 18:07:24.049582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:36.372 [2024-12-06 18:07:24.049587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:36.372 [2024-12-06 18:07:24.051788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.372 [2024-12-06 18:07:24.051956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:36.372 [2024-12-06 18:07:24.052134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:36.372 [2024-12-06 18:07:24.052139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.372 [2024-12-06 18:07:24.126601] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:36.372 [2024-12-06 18:07:24.127646] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:36.372 [2024-12-06 18:07:24.127653] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:36.372 [2024-12-06 18:07:24.127710] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:36.372 [2024-12-06 18:07:24.127726] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:36.372 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:36.373 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:30:36.373 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:36.373 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:36.373 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:36.373 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:36.373 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:36.373 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.373 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:36.373 [2024-12-06 18:07:24.181198] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:36.632 Malloc0 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:36.632 [2024-12-06 18:07:24.241189] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:30:36.632 test case1: single bdev can't be used in multiple subsystems 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:36.632 [2024-12-06 18:07:24.264819] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:30:36.632 [2024-12-06 18:07:24.264843] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:30:36.632 [2024-12-06 18:07:24.264851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:36.632 request: 00:30:36.632 { 00:30:36.632 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:30:36.632 "namespace": { 00:30:36.632 "bdev_name": "Malloc0", 00:30:36.632 "no_auto_visible": false, 00:30:36.632 "hide_metadata": false 00:30:36.632 }, 00:30:36.632 "method": "nvmf_subsystem_add_ns", 00:30:36.632 "req_id": 1 00:30:36.632 } 00:30:36.632 Got JSON-RPC error response 00:30:36.632 response: 00:30:36.632 { 00:30:36.632 "code": -32602, 00:30:36.632 "message": "Invalid parameters" 00:30:36.632 } 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:30:36.632 18:07:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:30:36.632 Adding namespace failed - expected result. 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:30:36.632 test case2: host connect to nvmf target in multiple paths 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:36.632 [2024-12-06 18:07:24.272965] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.632 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:36.891 18:07:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:30:37.460 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:30:37.460 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:30:37.460 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:37.460 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:30:37.460 18:07:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:30:39.365 18:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:39.365 18:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:39.365 18:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:39.365 18:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:30:39.365 18:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:39.365 18:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:30:39.365 18:07:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:39.365 [global] 00:30:39.365 thread=1 00:30:39.365 invalidate=1 
00:30:39.365 rw=write 00:30:39.365 time_based=1 00:30:39.365 runtime=1 00:30:39.365 ioengine=libaio 00:30:39.365 direct=1 00:30:39.365 bs=4096 00:30:39.365 iodepth=1 00:30:39.365 norandommap=0 00:30:39.365 numjobs=1 00:30:39.365 00:30:39.365 verify_dump=1 00:30:39.365 verify_backlog=512 00:30:39.365 verify_state_save=0 00:30:39.365 do_verify=1 00:30:39.365 verify=crc32c-intel 00:30:39.365 [job0] 00:30:39.365 filename=/dev/nvme0n1 00:30:39.365 Could not set queue depth (nvme0n1) 00:30:39.625 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:39.625 fio-3.35 00:30:39.625 Starting 1 thread 00:30:41.003 00:30:41.003 job0: (groupid=0, jobs=1): err= 0: pid=3297949: Fri Dec 6 18:07:28 2024 00:30:41.003 read: IOPS=19, BW=78.0KiB/s (79.8kB/s)(80.0KiB/1026msec) 00:30:41.003 slat (nsec): min=11599, max=28066, avg=22728.80, stdev=7133.42 00:30:41.003 clat (usec): min=858, max=42061, avg=39578.89, stdev=9125.95 00:30:41.003 lat (usec): min=871, max=42088, avg=39601.62, stdev=9128.16 00:30:41.003 clat percentiles (usec): 00:30:41.003 | 1.00th=[ 857], 5.00th=[ 857], 10.00th=[40633], 20.00th=[41157], 00:30:41.003 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:30:41.003 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:41.003 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:41.003 | 99.99th=[42206] 00:30:41.003 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:30:41.003 slat (nsec): min=3412, max=32053, avg=11973.16, stdev=5262.55 00:30:41.003 clat (usec): min=237, max=691, avg=441.81, stdev=86.83 00:30:41.003 lat (usec): min=240, max=706, avg=453.78, stdev=89.59 00:30:41.003 clat percentiles (usec): 00:30:41.003 | 1.00th=[ 269], 5.00th=[ 293], 10.00th=[ 330], 20.00th=[ 363], 00:30:41.003 | 30.00th=[ 388], 40.00th=[ 420], 50.00th=[ 453], 60.00th=[ 474], 00:30:41.003 | 70.00th=[ 486], 80.00th=[ 523], 90.00th=[ 553], 95.00th=[ 578], 00:30:41.003 | 99.00th=[ 627], 99.50th=[ 635], 99.90th=[ 693], 99.95th=[ 693], 00:30:41.003 | 99.99th=[ 693] 00:30:41.003 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:30:41.003 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:41.003 lat (usec) : 250=0.38%, 500=73.50%, 750=22.37%, 1000=0.19% 00:30:41.003 lat (msec) : 50=3.57% 00:30:41.003 cpu : usr=0.59%, sys=0.78%, ctx=532, majf=0, minf=1 00:30:41.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:41.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:41.003 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:41.003 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:41.003 00:30:41.003 Run status group 0 (all jobs): 00:30:41.003 READ: bw=78.0KiB/s (79.8kB/s), 78.0KiB/s-78.0KiB/s (79.8kB/s-79.8kB/s), io=80.0KiB (81.9kB), run=1026-1026msec 00:30:41.003 WRITE: bw=1996KiB/s (2044kB/s), 1996KiB/s-1996KiB/s (2044kB/s-2044kB/s), io=2048KiB (2097kB), run=1026-1026msec 00:30:41.003 00:30:41.003 Disk stats (read/write): 00:30:41.003 nvme0n1: ios=66/512, merge=0/0, ticks=893/193, in_queue=1086, util=97.49% 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:41.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:30:41.003 18:07:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:41.003 rmmod nvme_tcp 00:30:41.003 rmmod nvme_fabrics 00:30:41.003 rmmod nvme_keyring 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3297078 ']' 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3297078 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3297078 ']' 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3297078 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3297078 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3297078' 00:30:41.003 killing process with pid 3297078 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3297078 00:30:41.003 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3297078 00:30:41.262 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:41.262 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:41.262 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:41.262 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:30:41.262 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:41.262 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:30:41.262 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:30:41.262 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:41.262 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:41.262 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.262 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.262 18:07:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.166 18:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:43.166 00:30:43.166 real 0m12.774s 00:30:43.166 user 0m32.588s 00:30:43.166 sys 0m5.840s 00:30:43.166 18:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:43.166 18:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:43.166 ************************************ 00:30:43.166 END TEST nvmf_nmic 00:30:43.166 ************************************ 00:30:43.166 18:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:43.166 18:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:43.166 18:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:43.166 18:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:43.425 ************************************ 00:30:43.425 START TEST nvmf_fio_target 00:30:43.425 ************************************ 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:43.425 * Looking for test storage... 
00:30:43.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:43.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.425 --rc genhtml_branch_coverage=1 00:30:43.425 --rc genhtml_function_coverage=1 00:30:43.425 --rc genhtml_legend=1 00:30:43.425 --rc geninfo_all_blocks=1 00:30:43.425 --rc geninfo_unexecuted_blocks=1 00:30:43.425 00:30:43.425 ' 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:43.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.425 --rc genhtml_branch_coverage=1 00:30:43.425 --rc genhtml_function_coverage=1 00:30:43.425 --rc genhtml_legend=1 00:30:43.425 --rc geninfo_all_blocks=1 00:30:43.425 --rc geninfo_unexecuted_blocks=1 00:30:43.425 00:30:43.425 ' 00:30:43.425 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:43.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.425 --rc genhtml_branch_coverage=1 00:30:43.425 --rc genhtml_function_coverage=1 00:30:43.425 --rc genhtml_legend=1 00:30:43.425 --rc geninfo_all_blocks=1 00:30:43.425 --rc geninfo_unexecuted_blocks=1 00:30:43.425 00:30:43.426 ' 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:43.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.426 --rc genhtml_branch_coverage=1 00:30:43.426 --rc genhtml_function_coverage=1 00:30:43.426 --rc genhtml_legend=1 00:30:43.426 --rc geninfo_all_blocks=1 00:30:43.426 --rc geninfo_unexecuted_blocks=1 00:30:43.426 
00:30:43.426 ' 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:43.426 18:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:48.828 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:48.828 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:48.828 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:48.829 18:07:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:48.829 18:07:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:48.829 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:48.829 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:48.829 Found net 
devices under 0000:31:00.0: cvl_0_0 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:48.829 Found net devices under 0000:31:00.1: cvl_0_1 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:48.829 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:48.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:48.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:30:48.830 00:30:48.830 --- 10.0.0.2 ping statistics --- 00:30:48.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.830 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:48.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:48.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:30:48.830 00:30:48.830 --- 10.0.0.1 ping statistics --- 00:30:48.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:48.830 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3302628 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3302628 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3302628 ']' 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:48.830 18:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:49.091 [2024-12-06 18:07:36.672183] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:49.091 [2024-12-06 18:07:36.673348] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:30:49.091 [2024-12-06 18:07:36.673401] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.091 [2024-12-06 18:07:36.765945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:49.091 [2024-12-06 18:07:36.818705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:49.091 [2024-12-06 18:07:36.818756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:49.091 [2024-12-06 18:07:36.818765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:49.091 [2024-12-06 18:07:36.818772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:49.091 [2024-12-06 18:07:36.818779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:49.091 [2024-12-06 18:07:36.821138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.091 [2024-12-06 18:07:36.821238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:49.091 [2024-12-06 18:07:36.821397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.091 [2024-12-06 18:07:36.821397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:49.091 [2024-12-06 18:07:36.900184] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:49.091 [2024-12-06 18:07:36.900470] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:49.091 [2024-12-06 18:07:36.901112] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:49.091 [2024-12-06 18:07:36.901163] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:49.091 [2024-12-06 18:07:36.901169] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
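The target side is now fully up: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace with all four reactors in interrupt mode, and the trace that follows creates the TCP transport, the malloc/RAID bdevs, and subsystem nqn.2016-06.io.spdk:cnode1 before fio starts. As a reader aid, here is a minimal sketch of that bring-up, assembled only from commands echoed in this trace; a built SPDK tree at $SPDK_DIR is an assumption, and the harness's waitforlisten is approximated with a crude RPC poll:

  # Start the NVMe-oF target in interrupt mode inside the test namespace
  # (same flags as the nvmfappstart trace: instance 0, all tracepoint groups, cores 0-3).
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF &

  # Wait for JSON-RPC on /var/tmp/spdk.sock (crude stand-in for waitforlisten).
  until "$SPDK_DIR"/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  # Transport, backing bdev, subsystem, namespace, and listener, as traced in this log.
  "$SPDK_DIR"/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  "$SPDK_DIR"/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  "$SPDK_DIR"/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side: connect with the hostnqn/hostid generated by nvme gen-hostnqn above.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
      --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb

The -t tcp -o -u 8192 transport options mirror NVMF_TRANSPORT_OPTS as assembled in nvmf/common.sh earlier in the trace; everything else is lifted verbatim from the RPCs and nvme-cli calls that appear in this log.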
00:30:49.662 18:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:49.662 18:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:30:49.662 18:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:49.662 18:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:49.662 18:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:49.961 18:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.961 18:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:49.961 [2024-12-06 18:07:37.650333] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.961 18:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:50.221 18:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:30:50.221 18:07:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:50.480 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:30:50.480 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:50.480 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:30:50.480 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:50.741 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:30:50.741 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:30:51.001 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:51.001 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:30:51.001 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:51.261 18:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:30:51.261 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:51.522 18:07:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:30:51.522 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:30:51.782 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:51.782 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:51.782 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:52.042 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:52.042 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:52.299 18:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:52.299 [2024-12-06 18:07:40.018127] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.299 18:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:30:52.558 18:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:30:52.558 18:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:53.125 18:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:30:53.125 18:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:30:53.125 18:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:53.125 18:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:30:53.125 18:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:30:53.125 18:07:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:30:55.032 18:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:55.032 18:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:30:55.032 18:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:55.032 18:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:30:55.032 18:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:55.032 18:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:30:55.032 18:07:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:55.032 [global] 00:30:55.032 thread=1 00:30:55.032 invalidate=1 00:30:55.032 rw=write 00:30:55.032 time_based=1 00:30:55.032 runtime=1 00:30:55.032 ioengine=libaio 00:30:55.032 direct=1 00:30:55.032 bs=4096 00:30:55.032 iodepth=1 00:30:55.032 norandommap=0 00:30:55.032 numjobs=1 00:30:55.032 00:30:55.032 verify_dump=1 00:30:55.032 verify_backlog=512 00:30:55.032 verify_state_save=0 00:30:55.032 do_verify=1 00:30:55.032 verify=crc32c-intel 00:30:55.032 [job0] 00:30:55.032 filename=/dev/nvme0n1 00:30:55.032 [job1] 00:30:55.032 filename=/dev/nvme0n2 00:30:55.032 [job2] 00:30:55.032 filename=/dev/nvme0n3 00:30:55.032 [job3] 00:30:55.032 filename=/dev/nvme0n4 00:30:55.292 Could not set queue depth (nvme0n1) 00:30:55.292 Could not set queue depth (nvme0n2) 00:30:55.292 Could not set queue depth (nvme0n3) 00:30:55.292 Could not set queue depth (nvme0n4) 00:30:55.553 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:55.553 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:55.553 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:55.553 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:55.553 fio-3.35 00:30:55.553 Starting 4 threads 00:30:56.937 00:30:56.937 job0: (groupid=0, jobs=1): err= 0: pid=3304202: Fri Dec 6 18:07:44 2024 00:30:56.937 read: IOPS=16, BW=66.0KiB/s (67.5kB/s)(68.0KiB/1031msec) 00:30:56.937 slat (nsec): min=10979, max=26788, avg=24369.76, stdev=4998.16 00:30:56.937 clat (usec): min=1305, max=42202, avg=39488.99, stdev=9842.83 00:30:56.937 lat (usec): min=1316, max=42228, avg=39513.36, stdev=9846.24 00:30:56.937 clat percentiles (usec): 00:30:56.937 | 1.00th=[ 1303], 5.00th=[ 1303], 10.00th=[41157], 20.00th=[41681], 00:30:56.937 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:30:56.937 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:56.937 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:56.937 | 99.99th=[42206] 00:30:56.937 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:30:56.937 slat (nsec): min=4237, max=42420, avg=13639.69, stdev=4022.20 00:30:56.937 clat (usec): min=321, max=1052, avg=683.27, stdev=137.70 00:30:56.937 lat (usec): min=329, max=1067, avg=696.91, stdev=138.91 00:30:56.937 clat percentiles (usec): 00:30:56.937 | 1.00th=[ 347], 5.00th=[ 437], 10.00th=[ 490], 20.00th=[ 570], 00:30:56.937 | 30.00th=[ 611], 40.00th=[ 660], 50.00th=[ 701], 60.00th=[ 734], 00:30:56.937 | 70.00th=[ 766], 80.00th=[ 799], 90.00th=[ 840], 95.00th=[ 881], 
00:30:56.937 | 99.00th=[ 1004], 99.50th=[ 1029], 99.90th=[ 1057], 99.95th=[ 1057], 00:30:56.937 | 99.99th=[ 1057] 00:30:56.937 bw ( KiB/s): min= 4096, max= 4096, per=45.78%, avg=4096.00, stdev= 0.00, samples=1 00:30:56.937 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:56.937 lat (usec) : 500=10.21%, 750=53.69%, 1000=31.76% 00:30:56.937 lat (msec) : 2=1.32%, 50=3.02% 00:30:56.937 cpu : usr=0.39%, sys=0.49%, ctx=530, majf=0, minf=1 00:30:56.937 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:56.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.937 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.937 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:56.937 job1: (groupid=0, jobs=1): err= 0: pid=3304203: Fri Dec 6 18:07:44 2024 00:30:56.937 read: IOPS=16, BW=66.3KiB/s (67.9kB/s)(68.0KiB/1026msec) 00:30:56.937 slat (nsec): min=11628, max=27284, avg=25481.76, stdev=4278.40 00:30:56.937 clat (usec): min=40878, max=42114, avg=41736.63, stdev=421.74 00:30:56.937 lat (usec): min=40905, max=42141, avg=41762.11, stdev=424.08 00:30:56.937 clat percentiles (usec): 00:30:56.937 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:30:56.937 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:30:56.937 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:56.937 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:56.937 | 99.99th=[42206] 00:30:56.937 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:30:56.937 slat (nsec): min=4255, max=49449, avg=13904.39, stdev=4574.30 00:30:56.937 clat (usec): min=90, max=1008, avg=598.58, stdev=164.40 00:30:56.937 lat (usec): min=95, max=1023, avg=612.48, stdev=165.40 00:30:56.937 clat percentiles (usec): 00:30:56.937 | 1.00th=[ 235], 5.00th=[ 330], 10.00th=[ 367], 20.00th=[ 449], 00:30:56.937 | 30.00th=[ 519], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 644], 00:30:56.937 | 70.00th=[ 701], 80.00th=[ 750], 90.00th=[ 816], 95.00th=[ 848], 00:30:56.937 | 99.00th=[ 914], 99.50th=[ 971], 99.90th=[ 1012], 99.95th=[ 1012], 00:30:56.937 | 99.99th=[ 1012] 00:30:56.937 bw ( KiB/s): min= 4096, max= 4096, per=45.78%, avg=4096.00, stdev= 0.00, samples=1 00:30:56.937 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:56.937 lat (usec) : 100=0.19%, 250=1.32%, 500=25.90%, 750=49.91%, 1000=19.28% 00:30:56.937 lat (msec) : 2=0.19%, 50=3.21% 00:30:56.937 cpu : usr=0.29%, sys=0.68%, ctx=532, majf=0, minf=1 00:30:56.937 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:56.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.937 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.937 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:56.937 job2: (groupid=0, jobs=1): err= 0: pid=3304204: Fri Dec 6 18:07:44 2024 00:30:56.937 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:30:56.937 slat (nsec): min=11427, max=31475, avg=16489.01, stdev=3228.01 00:30:56.937 clat (usec): min=696, max=1356, avg=1058.61, stdev=98.09 00:30:56.937 lat (usec): min=716, max=1367, avg=1075.10, stdev=98.19 00:30:56.937 clat percentiles (usec): 00:30:56.937 | 1.00th=[ 799], 5.00th=[ 881], 
10.00th=[ 930], 20.00th=[ 988], 00:30:56.937 | 30.00th=[ 1020], 40.00th=[ 1045], 50.00th=[ 1074], 60.00th=[ 1090], 00:30:56.937 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1205], 00:30:56.937 | 99.00th=[ 1287], 99.50th=[ 1303], 99.90th=[ 1352], 99.95th=[ 1352], 00:30:56.937 | 99.99th=[ 1352] 00:30:56.937 write: IOPS=591, BW=2366KiB/s (2422kB/s)(2368KiB/1001msec); 0 zone resets 00:30:56.937 slat (nsec): min=4115, max=44838, avg=14925.02, stdev=4099.12 00:30:56.937 clat (usec): min=145, max=1089, avg=736.66, stdev=146.44 00:30:56.937 lat (usec): min=150, max=1104, avg=751.58, stdev=147.30 00:30:56.937 clat percentiles (usec): 00:30:56.937 | 1.00th=[ 355], 5.00th=[ 478], 10.00th=[ 537], 20.00th=[ 611], 00:30:56.937 | 30.00th=[ 676], 40.00th=[ 709], 50.00th=[ 750], 60.00th=[ 799], 00:30:56.938 | 70.00th=[ 832], 80.00th=[ 865], 90.00th=[ 906], 95.00th=[ 938], 00:30:56.938 | 99.00th=[ 1012], 99.50th=[ 1029], 99.90th=[ 1090], 99.95th=[ 1090], 00:30:56.938 | 99.99th=[ 1090] 00:30:56.938 bw ( KiB/s): min= 4096, max= 4096, per=45.78%, avg=4096.00, stdev= 0.00, samples=1 00:30:56.938 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:56.938 lat (usec) : 250=0.18%, 500=3.08%, 750=23.91%, 1000=36.14% 00:30:56.938 lat (msec) : 2=36.68% 00:30:56.938 cpu : usr=1.00%, sys=1.50%, ctx=1104, majf=0, minf=2 00:30:56.938 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:56.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.938 issued rwts: total=512,592,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.938 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:56.938 job3: (groupid=0, jobs=1): err= 0: pid=3304205: Fri Dec 6 18:07:44 2024 00:30:56.938 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:30:56.938 slat (nsec): min=11490, max=43612, avg=16009.89, stdev=3737.13 00:30:56.938 clat (usec): min=657, max=1332, avg=1005.66, stdev=105.26 00:30:56.938 lat (usec): min=670, max=1345, avg=1021.67, stdev=104.73 00:30:56.938 clat percentiles (usec): 00:30:56.938 | 1.00th=[ 742], 5.00th=[ 799], 10.00th=[ 865], 20.00th=[ 930], 00:30:56.938 | 30.00th=[ 971], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1029], 00:30:56.938 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1172], 00:30:56.938 | 99.00th=[ 1237], 99.50th=[ 1287], 99.90th=[ 1336], 99.95th=[ 1336], 00:30:56.938 | 99.99th=[ 1336] 00:30:56.938 write: IOPS=689, BW=2757KiB/s (2823kB/s)(2760KiB/1001msec); 0 zone resets 00:30:56.938 slat (nsec): min=4172, max=44910, avg=14398.10, stdev=3954.23 00:30:56.938 clat (usec): min=229, max=1449, avg=670.11, stdev=142.41 00:30:56.938 lat (usec): min=234, max=1494, avg=684.51, stdev=143.63 00:30:56.938 clat percentiles (usec): 00:30:56.938 | 1.00th=[ 302], 5.00th=[ 429], 10.00th=[ 494], 20.00th=[ 553], 00:30:56.938 | 30.00th=[ 603], 40.00th=[ 635], 50.00th=[ 676], 60.00th=[ 709], 00:30:56.938 | 70.00th=[ 750], 80.00th=[ 783], 90.00th=[ 840], 95.00th=[ 881], 00:30:56.938 | 99.00th=[ 1004], 99.50th=[ 1037], 99.90th=[ 1450], 99.95th=[ 1450], 00:30:56.938 | 99.99th=[ 1450] 00:30:56.938 bw ( KiB/s): min= 4096, max= 4096, per=45.78%, avg=4096.00, stdev= 0.00, samples=1 00:30:56.938 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:56.938 lat (usec) : 250=0.17%, 500=6.16%, 750=35.44%, 1000=34.03% 00:30:56.938 lat (msec) : 2=24.21% 00:30:56.938 cpu : usr=1.00%, sys=1.60%, ctx=1203, majf=0, minf=1 
00:30:56.938 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:56.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.938 issued rwts: total=512,690,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.938 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:56.938 00:30:56.938 Run status group 0 (all jobs): 00:30:56.938 READ: bw=4105KiB/s (4203kB/s), 66.0KiB/s-2046KiB/s (67.5kB/s-2095kB/s), io=4232KiB (4334kB), run=1001-1031msec 00:30:56.938 WRITE: bw=8947KiB/s (9161kB/s), 1986KiB/s-2757KiB/s (2034kB/s-2823kB/s), io=9224KiB (9445kB), run=1001-1031msec 00:30:56.938 00:30:56.938 Disk stats (read/write): 00:30:56.938 nvme0n1: ios=68/512, merge=0/0, ticks=571/344, in_queue=915, util=87.37% 00:30:56.938 nvme0n2: ios=67/512, merge=0/0, ticks=956/298, in_queue=1254, util=90.09% 00:30:56.938 nvme0n3: ios=468/512, merge=0/0, ticks=525/367, in_queue=892, util=94.81% 00:30:56.938 nvme0n4: ios=484/512, merge=0/0, ticks=1326/347, in_queue=1673, util=94.11% 00:30:56.938 18:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:30:56.938 [global] 00:30:56.938 thread=1 00:30:56.938 invalidate=1 00:30:56.938 rw=randwrite 00:30:56.938 time_based=1 00:30:56.938 runtime=1 00:30:56.938 ioengine=libaio 00:30:56.938 direct=1 00:30:56.938 bs=4096 00:30:56.938 iodepth=1 00:30:56.938 norandommap=0 00:30:56.938 numjobs=1 00:30:56.938 00:30:56.938 verify_dump=1 00:30:56.938 verify_backlog=512 00:30:56.938 verify_state_save=0 00:30:56.938 do_verify=1 00:30:56.938 verify=crc32c-intel 00:30:56.938 [job0] 00:30:56.938 filename=/dev/nvme0n1 00:30:56.938 [job1] 00:30:56.938 filename=/dev/nvme0n2 00:30:56.938 [job2] 00:30:56.938 filename=/dev/nvme0n3 00:30:56.938 [job3] 00:30:56.938 filename=/dev/nvme0n4 00:30:56.938 Could not set queue depth (nvme0n1) 00:30:56.938 Could not set queue depth (nvme0n2) 00:30:56.938 Could not set queue depth (nvme0n3) 00:30:56.938 Could not set queue depth (nvme0n4) 00:30:56.938 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:56.938 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:56.938 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:56.938 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:56.938 fio-3.35 00:30:56.938 Starting 4 threads 00:30:58.316 00:30:58.316 job0: (groupid=0, jobs=1): err= 0: pid=3304728: Fri Dec 6 18:07:45 2024 00:30:58.316 read: IOPS=45, BW=182KiB/s (186kB/s)(188KiB/1034msec) 00:30:58.316 slat (nsec): min=10841, max=26221, avg=21764.68, stdev=4844.46 00:30:58.316 clat (usec): min=635, max=42078, avg=15674.24, stdev=19585.00 00:30:58.316 lat (usec): min=646, max=42104, avg=15696.00, stdev=19586.05 00:30:58.316 clat percentiles (usec): 00:30:58.316 | 1.00th=[ 635], 5.00th=[ 955], 10.00th=[ 996], 20.00th=[ 1057], 00:30:58.316 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1188], 60.00th=[ 1237], 00:30:58.316 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:30:58.316 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:58.316 | 99.99th=[42206] 00:30:58.316 write: IOPS=495, 
BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:30:58.316 slat (nsec): min=3368, max=69926, avg=12778.93, stdev=3860.83 00:30:58.316 clat (usec): min=164, max=943, avg=561.73, stdev=134.27 00:30:58.316 lat (usec): min=167, max=957, avg=574.51, stdev=134.44 00:30:58.316 clat percentiles (usec): 00:30:58.316 | 1.00th=[ 253], 5.00th=[ 334], 10.00th=[ 371], 20.00th=[ 445], 00:30:58.316 | 30.00th=[ 494], 40.00th=[ 529], 50.00th=[ 570], 60.00th=[ 603], 00:30:58.316 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 725], 95.00th=[ 775], 00:30:58.316 | 99.00th=[ 873], 99.50th=[ 906], 99.90th=[ 947], 99.95th=[ 947], 00:30:58.316 | 99.99th=[ 947] 00:30:58.316 bw ( KiB/s): min= 4096, max= 4096, per=42.21%, avg=4096.00, stdev= 0.00, samples=1 00:30:58.316 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:58.316 lat (usec) : 250=0.89%, 500=28.44%, 750=54.74%, 1000=8.41% 00:30:58.316 lat (msec) : 2=4.47%, 50=3.04% 00:30:58.316 cpu : usr=0.58%, sys=1.16%, ctx=560, majf=0, minf=1 00:30:58.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.316 issued rwts: total=47,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:58.316 job1: (groupid=0, jobs=1): err= 0: pid=3304729: Fri Dec 6 18:07:45 2024 00:30:58.316 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:30:58.316 slat (nsec): min=4254, max=42492, avg=14424.90, stdev=3314.75 00:30:58.316 clat (usec): min=651, max=1128, avg=891.64, stdev=65.85 00:30:58.316 lat (usec): min=663, max=1143, avg=906.07, stdev=65.92 00:30:58.316 clat percentiles (usec): 00:30:58.316 | 1.00th=[ 742], 5.00th=[ 775], 10.00th=[ 799], 20.00th=[ 848], 00:30:58.316 | 30.00th=[ 865], 40.00th=[ 881], 50.00th=[ 898], 60.00th=[ 906], 00:30:58.316 | 70.00th=[ 922], 80.00th=[ 938], 90.00th=[ 971], 95.00th=[ 996], 00:30:58.316 | 99.00th=[ 1057], 99.50th=[ 1090], 99.90th=[ 1123], 99.95th=[ 1123], 00:30:58.316 | 99.99th=[ 1123] 00:30:58.316 write: IOPS=974, BW=3896KiB/s (3990kB/s)(3900KiB/1001msec); 0 zone resets 00:30:58.316 slat (nsec): min=4072, max=45928, avg=13058.15, stdev=3500.10 00:30:58.316 clat (usec): min=261, max=929, avg=531.47, stdev=112.50 00:30:58.316 lat (usec): min=266, max=942, avg=544.53, stdev=113.24 00:30:58.316 clat percentiles (usec): 00:30:58.316 | 1.00th=[ 293], 5.00th=[ 359], 10.00th=[ 400], 20.00th=[ 437], 00:30:58.316 | 30.00th=[ 465], 40.00th=[ 498], 50.00th=[ 529], 60.00th=[ 553], 00:30:58.316 | 70.00th=[ 586], 80.00th=[ 619], 90.00th=[ 685], 95.00th=[ 725], 00:30:58.316 | 99.00th=[ 816], 99.50th=[ 873], 99.90th=[ 930], 99.95th=[ 930], 00:30:58.316 | 99.99th=[ 930] 00:30:58.316 bw ( KiB/s): min= 4096, max= 4096, per=42.21%, avg=4096.00, stdev= 0.00, samples=1 00:30:58.316 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:58.316 lat (usec) : 500=26.97%, 750=36.72%, 1000=34.77% 00:30:58.316 lat (msec) : 2=1.55% 00:30:58.316 cpu : usr=0.80%, sys=1.90%, ctx=1488, majf=0, minf=1 00:30:58.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.316 issued rwts: total=512,975,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.316 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:30:58.316 job2: (groupid=0, jobs=1): err= 0: pid=3304730: Fri Dec 6 18:07:45 2024 00:30:58.316 read: IOPS=17, BW=69.6KiB/s (71.2kB/s)(72.0KiB/1035msec) 00:30:58.316 slat (nsec): min=11756, max=32182, avg=24395.17, stdev=6865.01 00:30:58.316 clat (usec): min=40926, max=42033, avg=41543.91, stdev=497.22 00:30:58.316 lat (usec): min=40944, max=42065, avg=41568.30, stdev=497.66 00:30:58.316 clat percentiles (usec): 00:30:58.316 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:58.316 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:30:58.316 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:58.316 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:58.316 | 99.99th=[42206] 00:30:58.316 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:30:58.316 slat (nsec): min=3430, max=40923, avg=12727.41, stdev=4145.50 00:30:58.316 clat (usec): min=205, max=1245, avg=543.42, stdev=113.47 00:30:58.316 lat (usec): min=208, max=1261, avg=556.15, stdev=114.66 00:30:58.316 clat percentiles (usec): 00:30:58.316 | 1.00th=[ 302], 5.00th=[ 343], 10.00th=[ 408], 20.00th=[ 457], 00:30:58.316 | 30.00th=[ 494], 40.00th=[ 519], 50.00th=[ 537], 60.00th=[ 562], 00:30:58.316 | 70.00th=[ 594], 80.00th=[ 635], 90.00th=[ 685], 95.00th=[ 717], 00:30:58.316 | 99.00th=[ 783], 99.50th=[ 848], 99.90th=[ 1254], 99.95th=[ 1254], 00:30:58.316 | 99.99th=[ 1254] 00:30:58.316 bw ( KiB/s): min= 4096, max= 4096, per=42.21%, avg=4096.00, stdev= 0.00, samples=1 00:30:58.316 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:58.316 lat (usec) : 250=0.19%, 500=31.32%, 750=63.21%, 1000=1.51% 00:30:58.316 lat (msec) : 2=0.38%, 50=3.40% 00:30:58.316 cpu : usr=0.29%, sys=1.16%, ctx=530, majf=0, minf=1 00:30:58.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.316 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:58.316 job3: (groupid=0, jobs=1): err= 0: pid=3304731: Fri Dec 6 18:07:45 2024 00:30:58.316 read: IOPS=19, BW=78.5KiB/s (80.4kB/s)(80.0KiB/1019msec) 00:30:58.316 slat (nsec): min=2961, max=29006, avg=24349.35, stdev=8706.71 00:30:58.316 clat (usec): min=957, max=42094, avg=39661.94, stdev=9120.92 00:30:58.316 lat (usec): min=960, max=42122, avg=39686.29, stdev=9125.90 00:30:58.316 clat percentiles (usec): 00:30:58.316 | 1.00th=[ 955], 5.00th=[ 955], 10.00th=[40633], 20.00th=[41157], 00:30:58.316 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:30:58.316 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:58.316 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:58.316 | 99.99th=[42206] 00:30:58.316 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:30:58.316 slat (nsec): min=3525, max=47254, avg=9894.11, stdev=6024.51 00:30:58.316 clat (usec): min=203, max=861, avg=426.91, stdev=140.35 00:30:58.316 lat (usec): min=209, max=908, avg=436.81, stdev=143.90 00:30:58.316 clat percentiles (usec): 00:30:58.316 | 1.00th=[ 235], 5.00th=[ 255], 10.00th=[ 277], 20.00th=[ 297], 00:30:58.316 | 30.00th=[ 314], 40.00th=[ 338], 50.00th=[ 396], 60.00th=[ 449], 00:30:58.316 | 70.00th=[ 
515], 80.00th=[ 553], 90.00th=[ 619], 95.00th=[ 693], 00:30:58.316 | 99.00th=[ 766], 99.50th=[ 816], 99.90th=[ 865], 99.95th=[ 865], 00:30:58.316 | 99.99th=[ 865] 00:30:58.316 bw ( KiB/s): min= 4096, max= 4096, per=42.21%, avg=4096.00, stdev= 0.00, samples=1 00:30:58.316 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:58.316 lat (usec) : 250=3.76%, 500=59.77%, 750=30.64%, 1000=2.26% 00:30:58.316 lat (msec) : 50=3.57% 00:30:58.316 cpu : usr=0.29%, sys=0.69%, ctx=534, majf=0, minf=1 00:30:58.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.316 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:58.316 00:30:58.316 Run status group 0 (all jobs): 00:30:58.316 READ: bw=2307KiB/s (2363kB/s), 69.6KiB/s-2046KiB/s (71.2kB/s-2095kB/s), io=2388KiB (2445kB), run=1001-1035msec 00:30:58.316 WRITE: bw=9704KiB/s (9937kB/s), 1979KiB/s-3896KiB/s (2026kB/s-3990kB/s), io=9.81MiB (10.3MB), run=1001-1035msec 00:30:58.316 00:30:58.316 Disk stats (read/write): 00:30:58.316 nvme0n1: ios=64/512, merge=0/0, ticks=590/209, in_queue=799, util=87.78% 00:30:58.316 nvme0n2: ios=536/665, merge=0/0, ticks=1373/343, in_queue=1716, util=92.46% 00:30:58.316 nvme0n3: ios=58/512, merge=0/0, ticks=634/222, in_queue=856, util=91.35% 00:30:58.316 nvme0n4: ios=73/512, merge=0/0, ticks=1024/194, in_queue=1218, util=96.58% 00:30:58.317 18:07:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:30:58.317 [global] 00:30:58.317 thread=1 00:30:58.317 invalidate=1 00:30:58.317 rw=write 00:30:58.317 time_based=1 00:30:58.317 runtime=1 00:30:58.317 ioengine=libaio 00:30:58.317 direct=1 00:30:58.317 bs=4096 00:30:58.317 iodepth=128 00:30:58.317 norandommap=0 00:30:58.317 numjobs=1 00:30:58.317 00:30:58.317 verify_dump=1 00:30:58.317 verify_backlog=512 00:30:58.317 verify_state_save=0 00:30:58.317 do_verify=1 00:30:58.317 verify=crc32c-intel 00:30:58.317 [job0] 00:30:58.317 filename=/dev/nvme0n1 00:30:58.317 [job1] 00:30:58.317 filename=/dev/nvme0n2 00:30:58.317 [job2] 00:30:58.317 filename=/dev/nvme0n3 00:30:58.317 [job3] 00:30:58.317 filename=/dev/nvme0n4 00:30:58.317 Could not set queue depth (nvme0n1) 00:30:58.317 Could not set queue depth (nvme0n2) 00:30:58.317 Could not set queue depth (nvme0n3) 00:30:58.317 Could not set queue depth (nvme0n4) 00:30:58.574 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:58.574 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:58.574 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:58.574 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:58.574 fio-3.35 00:30:58.574 Starting 4 threads 00:30:59.955 00:30:59.955 job0: (groupid=0, jobs=1): err= 0: pid=3305246: Fri Dec 6 18:07:47 2024 00:30:59.955 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:30:59.955 slat (nsec): min=943, max=16161k, avg=110270.89, stdev=827311.79 00:30:59.955 clat (usec): min=3165, max=65760, avg=12574.32, stdev=6958.73 
00:30:59.955 lat (usec): min=3182, max=65767, avg=12684.59, stdev=7063.96 00:30:59.955 clat percentiles (usec): 00:30:59.955 | 1.00th=[ 4817], 5.00th=[ 5997], 10.00th=[ 6390], 20.00th=[ 7570], 00:30:59.955 | 30.00th=[ 8717], 40.00th=[10028], 50.00th=[10814], 60.00th=[12125], 00:30:59.955 | 70.00th=[13566], 80.00th=[16450], 90.00th=[20055], 95.00th=[25035], 00:30:59.955 | 99.00th=[38011], 99.50th=[49546], 99.90th=[65799], 99.95th=[65799], 00:30:59.955 | 99.99th=[65799] 00:30:59.955 write: IOPS=3922, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1005msec); 0 zone resets 00:30:59.955 slat (nsec): min=1623, max=10035k, avg=144670.14, stdev=740501.74 00:30:59.955 clat (usec): min=1289, max=98693, avg=20923.70, stdev=18840.70 00:30:59.955 lat (usec): min=1300, max=98701, avg=21068.37, stdev=18960.78 00:30:59.955 clat percentiles (usec): 00:30:59.955 | 1.00th=[ 4015], 5.00th=[ 4424], 10.00th=[ 5735], 20.00th=[ 6456], 00:30:59.955 | 30.00th=[ 7767], 40.00th=[10028], 50.00th=[12649], 60.00th=[19530], 00:30:59.955 | 70.00th=[26346], 80.00th=[32375], 90.00th=[48497], 95.00th=[58983], 00:30:59.955 | 99.00th=[87557], 99.50th=[96994], 99.90th=[99091], 99.95th=[99091], 00:30:59.955 | 99.99th=[99091] 00:30:59.955 bw ( KiB/s): min=11584, max=18928, per=18.16%, avg=15256.00, stdev=5192.99, samples=2 00:30:59.955 iops : min= 2896, max= 4732, avg=3814.00, stdev=1298.25, samples=2 00:30:59.955 lat (msec) : 2=0.12%, 4=0.64%, 10=38.81%, 20=35.22%, 50=19.98% 00:30:59.955 lat (msec) : 100=5.22% 00:30:59.955 cpu : usr=1.99%, sys=4.08%, ctx=360, majf=0, minf=1 00:30:59.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:30:59.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:59.955 issued rwts: total=3584,3942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:59.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:59.955 job1: (groupid=0, jobs=1): err= 0: pid=3305247: Fri Dec 6 18:07:47 2024 00:30:59.955 read: IOPS=8996, BW=35.1MiB/s (36.8MB/s)(35.2MiB/1002msec) 00:30:59.955 slat (nsec): min=891, max=20839k, avg=55854.38, stdev=451369.06 00:30:59.955 clat (usec): min=1159, max=42451, avg=7396.08, stdev=3788.63 00:30:59.955 lat (usec): min=1633, max=42478, avg=7451.94, stdev=3826.15 00:30:59.955 clat percentiles (usec): 00:30:59.955 | 1.00th=[ 2769], 5.00th=[ 4113], 10.00th=[ 4621], 20.00th=[ 5407], 00:30:59.955 | 30.00th=[ 5932], 40.00th=[ 6325], 50.00th=[ 6521], 60.00th=[ 6915], 00:30:59.955 | 70.00th=[ 7308], 80.00th=[ 8225], 90.00th=[10814], 95.00th=[12649], 00:30:59.955 | 99.00th=[27657], 99.50th=[32113], 99.90th=[36963], 99.95th=[36963], 00:30:59.955 | 99.99th=[42206] 00:30:59.955 write: IOPS=9197, BW=35.9MiB/s (37.7MB/s)(36.0MiB/1002msec); 0 zone resets 00:30:59.955 slat (nsec): min=1544, max=7356.3k, avg=47213.43, stdev=333931.64 00:30:59.955 clat (usec): min=666, max=28050, avg=6564.56, stdev=2670.91 00:30:59.955 lat (usec): min=766, max=28054, avg=6611.78, stdev=2694.69 00:30:59.955 clat percentiles (usec): 00:30:59.955 | 1.00th=[ 1942], 5.00th=[ 3326], 10.00th=[ 4080], 20.00th=[ 5014], 00:30:59.955 | 30.00th=[ 5473], 40.00th=[ 5735], 50.00th=[ 6063], 60.00th=[ 6521], 00:30:59.955 | 70.00th=[ 6915], 80.00th=[ 7701], 90.00th=[ 9503], 95.00th=[11731], 00:30:59.955 | 99.00th=[18744], 99.50th=[20579], 99.90th=[21890], 99.95th=[21890], 00:30:59.955 | 99.99th=[28181] 00:30:59.955 bw ( KiB/s): min=32376, max=41352, per=43.87%, avg=36864.00, stdev=6346.99, samples=2 
00:30:59.955 iops : min= 8094, max=10338, avg=9216.00, stdev=1586.75, samples=2 00:30:59.955 lat (usec) : 750=0.01%, 1000=0.02% 00:30:59.955 lat (msec) : 2=0.71%, 4=5.92%, 10=83.16%, 20=8.85%, 50=1.33% 00:30:59.955 cpu : usr=3.90%, sys=7.29%, ctx=756, majf=0, minf=2 00:30:59.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:30:59.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:59.955 issued rwts: total=9014,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:59.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:59.955 job2: (groupid=0, jobs=1): err= 0: pid=3305248: Fri Dec 6 18:07:47 2024 00:30:59.955 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:30:59.955 slat (nsec): min=974, max=15570k, avg=132019.49, stdev=958499.76 00:30:59.955 clat (usec): min=4789, max=44625, avg=16729.94, stdev=7157.53 00:30:59.955 lat (usec): min=4798, max=44631, avg=16861.96, stdev=7227.31 00:30:59.955 clat percentiles (usec): 00:30:59.955 | 1.00th=[ 7242], 5.00th=[ 8717], 10.00th=[10421], 20.00th=[10945], 00:30:59.955 | 30.00th=[11863], 40.00th=[13173], 50.00th=[14353], 60.00th=[15926], 00:30:59.955 | 70.00th=[19530], 80.00th=[21627], 90.00th=[27132], 95.00th=[32375], 00:30:59.955 | 99.00th=[40109], 99.50th=[40109], 99.90th=[40633], 99.95th=[41681], 00:30:59.955 | 99.99th=[44827] 00:30:59.955 write: IOPS=3579, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1006msec); 0 zone resets 00:30:59.955 slat (nsec): min=1655, max=16537k, avg=140399.32, stdev=963867.83 00:30:59.955 clat (usec): min=897, max=63407, avg=18773.35, stdev=10695.88 00:30:59.955 lat (usec): min=907, max=63412, avg=18913.75, stdev=10769.56 00:30:59.955 clat percentiles (usec): 00:30:59.955 | 1.00th=[ 1942], 5.00th=[ 6194], 10.00th=[ 8979], 20.00th=[10159], 00:30:59.955 | 30.00th=[11338], 40.00th=[15008], 50.00th=[17171], 60.00th=[20317], 00:30:59.955 | 70.00th=[22676], 80.00th=[24249], 90.00th=[30016], 95.00th=[35914], 00:30:59.955 | 99.00th=[61604], 99.50th=[62653], 99.90th=[63177], 99.95th=[63177], 00:30:59.955 | 99.99th=[63177] 00:30:59.955 bw ( KiB/s): min=11112, max=17560, per=17.06%, avg=14336.00, stdev=4559.42, samples=2 00:30:59.955 iops : min= 2778, max= 4390, avg=3584.00, stdev=1139.86, samples=2 00:30:59.955 lat (usec) : 1000=0.11% 00:30:59.955 lat (msec) : 2=0.45%, 4=0.95%, 10=13.10%, 20=50.74%, 50=33.22% 00:30:59.955 lat (msec) : 100=1.43% 00:30:59.955 cpu : usr=2.19%, sys=3.78%, ctx=235, majf=0, minf=1 00:30:59.955 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:30:59.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:59.955 issued rwts: total=3584,3601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:59.955 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:59.955 job3: (groupid=0, jobs=1): err= 0: pid=3305249: Fri Dec 6 18:07:47 2024 00:30:59.955 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:30:59.955 slat (nsec): min=939, max=14368k, avg=92661.88, stdev=703679.30 00:30:59.955 clat (usec): min=1231, max=42556, avg=12150.23, stdev=5731.09 00:30:59.955 lat (usec): min=1240, max=42568, avg=12242.89, stdev=5779.88 00:30:59.955 clat percentiles (usec): 00:30:59.955 | 1.00th=[ 3097], 5.00th=[ 4817], 10.00th=[ 6521], 20.00th=[ 8586], 00:30:59.955 | 30.00th=[ 9110], 40.00th=[10028], 50.00th=[11076], 
60.00th=[11731], 00:30:59.955 | 70.00th=[12911], 80.00th=[16057], 90.00th=[19268], 95.00th=[23987], 00:30:59.955 | 99.00th=[33817], 99.50th=[38011], 99.90th=[41681], 99.95th=[41681], 00:30:59.955 | 99.99th=[42730] 00:30:59.956 write: IOPS=4348, BW=17.0MiB/s (17.8MB/s)(17.1MiB/1006msec); 0 zone resets 00:30:59.956 slat (nsec): min=1566, max=12363k, avg=125099.95, stdev=803479.90 00:30:59.956 clat (usec): min=396, max=102468, avg=17796.71, stdev=17413.10 00:30:59.956 lat (usec): min=429, max=102476, avg=17921.81, stdev=17524.72 00:30:59.956 clat percentiles (usec): 00:30:59.956 | 1.00th=[ 1172], 5.00th=[ 1827], 10.00th=[ 3032], 20.00th=[ 6587], 00:30:59.956 | 30.00th=[ 7832], 40.00th=[ 9110], 50.00th=[ 11338], 60.00th=[ 13566], 00:30:59.956 | 70.00th=[ 20055], 80.00th=[ 29230], 90.00th=[ 38536], 95.00th=[ 51119], 00:30:59.956 | 99.00th=[ 92799], 99.50th=[ 98042], 99.90th=[102237], 99.95th=[102237], 00:30:59.956 | 99.99th=[102237] 00:30:59.956 bw ( KiB/s): min=16248, max=17728, per=20.22%, avg=16988.00, stdev=1046.52, samples=2 00:30:59.956 iops : min= 4062, max= 4432, avg=4247.00, stdev=261.63, samples=2 00:30:59.956 lat (usec) : 500=0.02%, 750=0.08%, 1000=0.32% 00:30:59.956 lat (msec) : 2=2.43%, 4=6.24%, 10=33.05%, 20=37.37%, 50=17.77% 00:30:59.956 lat (msec) : 100=2.63%, 250=0.07% 00:30:59.956 cpu : usr=2.49%, sys=3.48%, ctx=397, majf=0, minf=1 00:30:59.956 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:30:59.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:59.956 issued rwts: total=4096,4375,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:59.956 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:59.956 00:30:59.956 Run status group 0 (all jobs): 00:30:59.956 READ: bw=78.7MiB/s (82.6MB/s), 13.9MiB/s-35.1MiB/s (14.6MB/s-36.8MB/s), io=79.2MiB (83.1MB), run=1002-1006msec 00:30:59.956 WRITE: bw=82.1MiB/s (86.0MB/s), 14.0MiB/s-35.9MiB/s (14.7MB/s-37.7MB/s), io=82.6MiB (86.6MB), run=1002-1006msec 00:30:59.956 00:30:59.956 Disk stats (read/write): 00:30:59.956 nvme0n1: ios=2735/3072, merge=0/0, ticks=33149/69111, in_queue=102260, util=87.78% 00:30:59.956 nvme0n2: ios=7793/8192, merge=0/0, ticks=35666/33986, in_queue=69652, util=91.64% 00:30:59.956 nvme0n3: ios=3072/3351, merge=0/0, ticks=31328/35574, in_queue=66902, util=88.29% 00:30:59.956 nvme0n4: ios=3244/3584, merge=0/0, ticks=34139/62845, in_queue=96984, util=95.19% 00:30:59.956 18:07:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:30:59.956 [global] 00:30:59.956 thread=1 00:30:59.956 invalidate=1 00:30:59.956 rw=randwrite 00:30:59.956 time_based=1 00:30:59.956 runtime=1 00:30:59.956 ioengine=libaio 00:30:59.956 direct=1 00:30:59.956 bs=4096 00:30:59.956 iodepth=128 00:30:59.956 norandommap=0 00:30:59.956 numjobs=1 00:30:59.956 00:30:59.956 verify_dump=1 00:30:59.956 verify_backlog=512 00:30:59.956 verify_state_save=0 00:30:59.956 do_verify=1 00:30:59.956 verify=crc32c-intel 00:30:59.956 [job0] 00:30:59.956 filename=/dev/nvme0n1 00:30:59.956 [job1] 00:30:59.956 filename=/dev/nvme0n2 00:30:59.956 [job2] 00:30:59.956 filename=/dev/nvme0n3 00:30:59.956 [job3] 00:30:59.956 filename=/dev/nvme0n4 00:30:59.956 Could not set queue depth (nvme0n1) 00:30:59.956 Could not set queue depth (nvme0n2) 00:30:59.956 Could not set queue depth (nvme0n3) 
00:30:59.956 Could not set queue depth (nvme0n4) 00:31:00.216 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:00.216 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:00.216 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:00.216 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:00.216 fio-3.35 00:31:00.216 Starting 4 threads 00:31:01.618 00:31:01.618 job0: (groupid=0, jobs=1): err= 0: pid=3305776: Fri Dec 6 18:07:49 2024 00:31:01.618 read: IOPS=4472, BW=17.5MiB/s (18.3MB/s)(17.6MiB/1006msec) 00:31:01.618 slat (nsec): min=892, max=19383k, avg=126156.27, stdev=939316.78 00:31:01.618 clat (usec): min=3372, max=44536, avg=16149.41, stdev=7859.14 00:31:01.618 lat (usec): min=3375, max=44704, avg=16275.56, stdev=7933.40 00:31:01.618 clat percentiles (usec): 00:31:01.618 | 1.00th=[ 4752], 5.00th=[ 5997], 10.00th=[ 6521], 20.00th=[10945], 00:31:01.618 | 30.00th=[11469], 40.00th=[12518], 50.00th=[13960], 60.00th=[15401], 00:31:01.618 | 70.00th=[17695], 80.00th=[22938], 90.00th=[28443], 95.00th=[32113], 00:31:01.618 | 99.00th=[39060], 99.50th=[39584], 99.90th=[39584], 99.95th=[42206], 00:31:01.618 | 99.99th=[44303] 00:31:01.618 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:31:01.618 slat (nsec): min=1483, max=18283k, avg=90522.76, stdev=707698.18 00:31:01.618 clat (usec): min=1227, max=49384, avg=11924.10, stdev=6284.45 00:31:01.618 lat (usec): min=1239, max=51898, avg=12014.62, stdev=6350.73 00:31:01.618 clat percentiles (usec): 00:31:01.618 | 1.00th=[ 3392], 5.00th=[ 5145], 10.00th=[ 6128], 20.00th=[ 8455], 00:31:01.618 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[10945], 60.00th=[11207], 00:31:01.618 | 70.00th=[11994], 80.00th=[14353], 90.00th=[19792], 95.00th=[23987], 00:31:01.618 | 99.00th=[36439], 99.50th=[45876], 99.90th=[49546], 99.95th=[49546], 00:31:01.618 | 99.99th=[49546] 00:31:01.618 bw ( KiB/s): min=16384, max=20480, per=21.78%, avg=18432.00, stdev=2896.31, samples=2 00:31:01.618 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:31:01.618 lat (msec) : 2=0.09%, 4=1.02%, 10=29.53%, 20=52.45%, 50=16.91% 00:31:01.618 cpu : usr=2.19%, sys=2.19%, ctx=318, majf=0, minf=1 00:31:01.618 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:01.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.618 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:01.618 issued rwts: total=4499,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.618 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:01.618 job1: (groupid=0, jobs=1): err= 0: pid=3305777: Fri Dec 6 18:07:49 2024 00:31:01.618 read: IOPS=10.2k, BW=39.9MiB/s (41.8MB/s)(40.0MiB/1003msec) 00:31:01.618 slat (nsec): min=962, max=10721k, avg=49934.36, stdev=414411.01 00:31:01.618 clat (usec): min=2314, max=23602, avg=6467.45, stdev=2531.44 00:31:01.618 lat (usec): min=2319, max=23611, avg=6517.39, stdev=2558.07 00:31:01.618 clat percentiles (usec): 00:31:01.618 | 1.00th=[ 3392], 5.00th=[ 4293], 10.00th=[ 4621], 20.00th=[ 4948], 00:31:01.618 | 30.00th=[ 5145], 40.00th=[ 5342], 50.00th=[ 5538], 60.00th=[ 5932], 00:31:01.618 | 70.00th=[ 6783], 80.00th=[ 7832], 90.00th=[ 8848], 95.00th=[11600], 00:31:01.618 | 99.00th=[17433], 99.50th=[18220], 
99.90th=[20317], 99.95th=[20317], 00:31:01.618 | 99.99th=[23200] 00:31:01.618 write: IOPS=10.3k, BW=40.3MiB/s (42.3MB/s)(40.4MiB/1003msec); 0 zone resets 00:31:01.618 slat (nsec): min=1618, max=14953k, avg=43752.21, stdev=355826.36 00:31:01.618 clat (usec): min=768, max=25224, avg=5853.59, stdev=2887.16 00:31:01.618 lat (usec): min=777, max=25242, avg=5897.34, stdev=2904.58 00:31:01.618 clat percentiles (usec): 00:31:01.618 | 1.00th=[ 2442], 5.00th=[ 3261], 10.00th=[ 3556], 20.00th=[ 4178], 00:31:01.618 | 30.00th=[ 4752], 40.00th=[ 5145], 50.00th=[ 5473], 60.00th=[ 5604], 00:31:01.618 | 70.00th=[ 5735], 80.00th=[ 5997], 90.00th=[ 8029], 95.00th=[10552], 00:31:01.618 | 99.00th=[20055], 99.50th=[20579], 99.90th=[22414], 99.95th=[22676], 00:31:01.618 | 99.99th=[22676] 00:31:01.618 bw ( KiB/s): min=37344, max=44624, per=48.44%, avg=40984.00, stdev=5147.74, samples=2 00:31:01.618 iops : min= 9336, max=11156, avg=10246.00, stdev=1286.93, samples=2 00:31:01.618 lat (usec) : 1000=0.02% 00:31:01.618 lat (msec) : 2=0.23%, 4=9.70%, 10=83.59%, 20=5.53%, 50=0.93% 00:31:01.618 cpu : usr=3.89%, sys=5.39%, ctx=734, majf=0, minf=1 00:31:01.618 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:31:01.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:01.619 issued rwts: total=10240,10351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.619 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:01.619 job2: (groupid=0, jobs=1): err= 0: pid=3305778: Fri Dec 6 18:07:49 2024 00:31:01.619 read: IOPS=3220, BW=12.6MiB/s (13.2MB/s)(13.2MiB/1046msec) 00:31:01.619 slat (nsec): min=961, max=11182k, avg=127905.41, stdev=828357.18 00:31:01.619 clat (usec): min=5100, max=56119, avg=15914.60, stdev=10484.97 00:31:01.619 lat (usec): min=5102, max=56121, avg=16042.51, stdev=10527.26 00:31:01.619 clat percentiles (usec): 00:31:01.619 | 1.00th=[ 5866], 5.00th=[ 7308], 10.00th=[ 8356], 20.00th=[ 9896], 00:31:01.619 | 30.00th=[10552], 40.00th=[11469], 50.00th=[12649], 60.00th=[13435], 00:31:01.619 | 70.00th=[15139], 80.00th=[18744], 90.00th=[28705], 95.00th=[42206], 00:31:01.619 | 99.00th=[55837], 99.50th=[55837], 99.90th=[56361], 99.95th=[56361], 00:31:01.619 | 99.99th=[56361] 00:31:01.619 write: IOPS=3426, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1046msec); 0 zone resets 00:31:01.619 slat (nsec): min=1613, max=13517k, avg=154654.17, stdev=724856.31 00:31:01.619 clat (usec): min=1179, max=50553, avg=22061.77, stdev=13081.87 00:31:01.619 lat (usec): min=1190, max=50559, avg=22216.42, stdev=13168.87 00:31:01.619 clat percentiles (usec): 00:31:01.619 | 1.00th=[ 5473], 5.00th=[ 5800], 10.00th=[ 6587], 20.00th=[ 9241], 00:31:01.619 | 30.00th=[10683], 40.00th=[12518], 50.00th=[21103], 60.00th=[27395], 00:31:01.619 | 70.00th=[32113], 80.00th=[35914], 90.00th=[40633], 95.00th=[42206], 00:31:01.619 | 99.00th=[45351], 99.50th=[45351], 99.90th=[46400], 99.95th=[50594], 00:31:01.619 | 99.99th=[50594] 00:31:01.619 bw ( KiB/s): min=12720, max=15952, per=16.94%, avg=14336.00, stdev=2285.37, samples=2 00:31:01.619 iops : min= 3180, max= 3988, avg=3584.00, stdev=571.34, samples=2 00:31:01.619 lat (msec) : 2=0.13%, 4=0.09%, 10=22.45%, 20=42.44%, 50=33.12% 00:31:01.619 lat (msec) : 100=1.77% 00:31:01.619 cpu : usr=2.58%, sys=2.49%, ctx=351, majf=0, minf=2 00:31:01.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:31:01.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:31:01.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:01.619 issued rwts: total=3369,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.619 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:01.619 job3: (groupid=0, jobs=1): err= 0: pid=3305779: Fri Dec 6 18:07:49 2024 00:31:01.619 read: IOPS=3221, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1006msec) 00:31:01.619 slat (nsec): min=953, max=16076k, avg=162154.82, stdev=1057336.26 00:31:01.619 clat (usec): min=1258, max=54499, avg=20383.66, stdev=9587.82 00:31:01.619 lat (usec): min=4621, max=58570, avg=20545.82, stdev=9676.32 00:31:01.619 clat percentiles (usec): 00:31:01.619 | 1.00th=[ 5014], 5.00th=[ 6980], 10.00th=[ 7308], 20.00th=[13173], 00:31:01.619 | 30.00th=[16057], 40.00th=[18220], 50.00th=[19268], 60.00th=[20841], 00:31:01.619 | 70.00th=[22414], 80.00th=[25297], 90.00th=[35914], 95.00th=[40633], 00:31:01.619 | 99.00th=[52167], 99.50th=[52167], 99.90th=[54264], 99.95th=[54264], 00:31:01.619 | 99.99th=[54264] 00:31:01.619 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:31:01.619 slat (nsec): min=1531, max=13738k, avg=128130.67, stdev=882599.11 00:31:01.619 clat (usec): min=2652, max=54482, avg=16977.08, stdev=9137.59 00:31:01.619 lat (usec): min=2657, max=54492, avg=17105.21, stdev=9225.87 00:31:01.619 clat percentiles (usec): 00:31:01.619 | 1.00th=[ 4228], 5.00th=[ 5604], 10.00th=[ 6718], 20.00th=[ 8717], 00:31:01.619 | 30.00th=[10683], 40.00th=[14484], 50.00th=[16450], 60.00th=[16909], 00:31:01.619 | 70.00th=[17171], 80.00th=[23987], 90.00th=[30540], 95.00th=[35914], 00:31:01.619 | 99.00th=[43254], 99.50th=[48497], 99.90th=[49546], 99.95th=[49546], 00:31:01.619 | 99.99th=[54264] 00:31:01.619 bw ( KiB/s): min=12288, max=16384, per=16.94%, avg=14336.00, stdev=2896.31, samples=2 00:31:01.619 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:31:01.619 lat (msec) : 2=0.01%, 4=0.35%, 10=20.69%, 20=44.76%, 50=33.63% 00:31:01.619 lat (msec) : 100=0.56% 00:31:01.619 cpu : usr=2.29%, sys=3.78%, ctx=225, majf=0, minf=1 00:31:01.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:31:01.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:01.619 issued rwts: total=3241,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.619 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:01.619 00:31:01.619 Run status group 0 (all jobs): 00:31:01.619 READ: bw=79.7MiB/s (83.6MB/s), 12.6MiB/s-39.9MiB/s (13.2MB/s-41.8MB/s), io=83.4MiB (87.4MB), run=1003-1046msec 00:31:01.619 WRITE: bw=82.6MiB/s (86.6MB/s), 13.4MiB/s-40.3MiB/s (14.0MB/s-42.3MB/s), io=86.4MiB (90.6MB), run=1003-1046msec 00:31:01.619 00:31:01.619 Disk stats (read/write): 00:31:01.619 nvme0n1: ios=4019/4096, merge=0/0, ticks=31083/21819, in_queue=52902, util=88.98% 00:31:01.619 nvme0n2: ios=8492/8704, merge=0/0, ticks=53288/48353, in_queue=101641, util=96.45% 00:31:01.619 nvme0n3: ios=2577/2847, merge=0/0, ticks=37697/66790, in_queue=104487, util=92.72% 00:31:01.619 nvme0n4: ios=2917/3072, merge=0/0, ticks=20676/16974, in_queue=37650, util=89.72% 00:31:01.619 18:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:01.619 18:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3306107 00:31:01.619 18:07:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:01.619 18:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:01.619 [global] 00:31:01.619 thread=1 00:31:01.619 invalidate=1 00:31:01.619 rw=read 00:31:01.619 time_based=1 00:31:01.619 runtime=10 00:31:01.619 ioengine=libaio 00:31:01.619 direct=1 00:31:01.619 bs=4096 00:31:01.619 iodepth=1 00:31:01.619 norandommap=1 00:31:01.619 numjobs=1 00:31:01.619 00:31:01.619 [job0] 00:31:01.619 filename=/dev/nvme0n1 00:31:01.619 [job1] 00:31:01.619 filename=/dev/nvme0n2 00:31:01.619 [job2] 00:31:01.619 filename=/dev/nvme0n3 00:31:01.619 [job3] 00:31:01.619 filename=/dev/nvme0n4 00:31:01.619 Could not set queue depth (nvme0n1) 00:31:01.619 Could not set queue depth (nvme0n2) 00:31:01.619 Could not set queue depth (nvme0n3) 00:31:01.619 Could not set queue depth (nvme0n4) 00:31:01.879 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:01.879 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:01.879 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:01.879 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:01.879 fio-3.35 00:31:01.879 Starting 4 threads 00:31:04.415 18:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:04.675 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10563584, buflen=4096 00:31:04.675 fio: pid=3306296, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:04.675 18:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:04.935 18:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:04.935 18:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:04.935 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=15421440, buflen=4096 00:31:04.935 fio: pid=3306295, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:04.935 18:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:04.935 18:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:04.935 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10641408, buflen=4096 00:31:04.935 fio: pid=3306293, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:05.195 18:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:05.195 18:07:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:05.195 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=4849664, buflen=4096 00:31:05.195 fio: pid=3306294, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:31:05.195 00:31:05.195 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3306293: Fri Dec 6 18:07:52 2024 00:31:05.195 read: IOPS=859, BW=3438KiB/s (3520kB/s)(10.1MiB/3023msec) 00:31:05.195 slat (usec): min=2, max=24285, avg=36.46, stdev=581.07 00:31:05.195 clat (usec): min=315, max=5820, avg=1114.86, stdev=205.45 00:31:05.195 lat (usec): min=321, max=25453, avg=1151.32, stdev=613.47 00:31:05.195 clat percentiles (usec): 00:31:05.195 | 1.00th=[ 523], 5.00th=[ 766], 10.00th=[ 865], 20.00th=[ 971], 00:31:05.195 | 30.00th=[ 1057], 40.00th=[ 1106], 50.00th=[ 1156], 60.00th=[ 1188], 00:31:05.195 | 70.00th=[ 1221], 80.00th=[ 1254], 90.00th=[ 1303], 95.00th=[ 1352], 00:31:05.195 | 99.00th=[ 1450], 99.50th=[ 1483], 99.90th=[ 1565], 99.95th=[ 1614], 00:31:05.195 | 99.99th=[ 5800] 00:31:05.195 bw ( KiB/s): min= 3288, max= 3888, per=27.72%, avg=3515.20, stdev=256.10, samples=5 00:31:05.195 iops : min= 822, max= 972, avg=878.80, stdev=64.02, samples=5 00:31:05.195 lat (usec) : 500=0.85%, 750=3.73%, 1000=18.08% 00:31:05.195 lat (msec) : 2=77.26%, 10=0.04% 00:31:05.195 cpu : usr=0.76%, sys=1.36%, ctx=2603, majf=0, minf=1 00:31:05.195 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.195 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.195 issued rwts: total=2599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.195 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:05.195 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3306294: Fri Dec 6 18:07:52 2024 00:31:05.195 read: IOPS=371, BW=1483KiB/s (1518kB/s)(4736KiB/3194msec) 00:31:05.195 slat (usec): min=2, max=14912, avg=65.39, stdev=744.14 00:31:05.195 clat (usec): min=391, max=42194, avg=2610.35, stdev=8085.30 00:31:05.195 lat (usec): min=402, max=42213, avg=2670.30, stdev=8110.47 00:31:05.195 clat percentiles (usec): 00:31:05.195 | 1.00th=[ 523], 5.00th=[ 627], 10.00th=[ 660], 20.00th=[ 709], 00:31:05.195 | 30.00th=[ 750], 40.00th=[ 791], 50.00th=[ 848], 60.00th=[ 1057], 00:31:05.195 | 70.00th=[ 1156], 80.00th=[ 1221], 90.00th=[ 1303], 95.00th=[ 1385], 00:31:05.195 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:31:05.195 | 99.99th=[42206] 00:31:05.195 bw ( KiB/s): min= 600, max= 3129, per=11.39%, avg=1445.50, stdev=940.28, samples=6 00:31:05.195 iops : min= 150, max= 782, avg=361.33, stdev=234.98, samples=6 00:31:05.195 lat (usec) : 500=0.51%, 750=28.61%, 1000=27.09% 00:31:05.195 lat (msec) : 2=39.58%, 50=4.14% 00:31:05.195 cpu : usr=0.38%, sys=0.60%, ctx=1193, majf=0, minf=2 00:31:05.195 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.195 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.195 issued rwts: total=1185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.195 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:05.195 job2: 
(groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3306295: Fri Dec 6 18:07:52 2024 00:31:05.195 read: IOPS=1317, BW=5269KiB/s (5396kB/s)(14.7MiB/2858msec) 00:31:05.195 slat (nsec): min=2603, max=55981, avg=11942.07, stdev=6645.18 00:31:05.195 clat (usec): min=147, max=1055, avg=738.59, stdev=86.56 00:31:05.195 lat (usec): min=158, max=1081, avg=750.53, stdev=88.22 00:31:05.195 clat percentiles (usec): 00:31:05.195 | 1.00th=[ 494], 5.00th=[ 578], 10.00th=[ 627], 20.00th=[ 676], 00:31:05.195 | 30.00th=[ 709], 40.00th=[ 734], 50.00th=[ 750], 60.00th=[ 766], 00:31:05.195 | 70.00th=[ 783], 80.00th=[ 807], 90.00th=[ 832], 95.00th=[ 865], 00:31:05.195 | 99.00th=[ 938], 99.50th=[ 963], 99.90th=[ 1012], 99.95th=[ 1045], 00:31:05.195 | 99.99th=[ 1057] 00:31:05.195 bw ( KiB/s): min= 5056, max= 5448, per=41.79%, avg=5300.80, stdev=159.16, samples=5 00:31:05.195 iops : min= 1264, max= 1362, avg=1325.20, stdev=39.79, samples=5 00:31:05.195 lat (usec) : 250=0.03%, 500=1.17%, 750=49.95%, 1000=48.62% 00:31:05.195 lat (msec) : 2=0.21% 00:31:05.195 cpu : usr=0.77%, sys=1.44%, ctx=3766, majf=0, minf=2 00:31:05.195 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.195 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.195 issued rwts: total=3766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.195 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:05.195 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3306296: Fri Dec 6 18:07:52 2024 00:31:05.195 read: IOPS=963, BW=3852KiB/s (3945kB/s)(10.1MiB/2678msec) 00:31:05.195 slat (nsec): min=2578, max=57872, avg=16403.28, stdev=6108.69 00:31:05.195 clat (usec): min=453, max=41350, avg=1014.41, stdev=1937.25 00:31:05.195 lat (usec): min=468, max=41376, avg=1030.82, stdev=1937.66 00:31:05.195 clat percentiles (usec): 00:31:05.195 | 1.00th=[ 619], 5.00th=[ 717], 10.00th=[ 766], 20.00th=[ 832], 00:31:05.195 | 30.00th=[ 881], 40.00th=[ 914], 50.00th=[ 938], 60.00th=[ 963], 00:31:05.195 | 70.00th=[ 979], 80.00th=[ 1004], 90.00th=[ 1045], 95.00th=[ 1074], 00:31:05.195 | 99.00th=[ 1139], 99.50th=[ 1205], 99.90th=[41157], 99.95th=[41157], 00:31:05.195 | 99.99th=[41157] 00:31:05.195 bw ( KiB/s): min= 2168, max= 4448, per=30.41%, avg=3857.60, stdev=951.01, samples=5 00:31:05.195 iops : min= 542, max= 1112, avg=964.40, stdev=237.75, samples=5 00:31:05.195 lat (usec) : 500=0.23%, 750=7.44%, 1000=70.04% 00:31:05.195 lat (msec) : 2=21.98%, 4=0.04%, 50=0.23% 00:31:05.195 cpu : usr=1.49%, sys=2.13%, ctx=2580, majf=0, minf=2 00:31:05.195 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.195 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.195 issued rwts: total=2580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.195 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:05.195 00:31:05.195 Run status group 0 (all jobs): 00:31:05.195 READ: bw=12.4MiB/s (13.0MB/s), 1483KiB/s-5269KiB/s (1518kB/s-5396kB/s), io=39.6MiB (41.5MB), run=2678-3194msec 00:31:05.195 00:31:05.195 Disk stats (read/write): 00:31:05.195 nvme0n1: ios=2480/0, merge=0/0, ticks=2715/0, in_queue=2715, util=94.76% 00:31:05.195 nvme0n2: ios=1164/0, merge=0/0, ticks=3022/0, in_queue=3022, util=94.53% 
00:31:05.195 nvme0n3: ios=3766/0, merge=0/0, ticks=2724/0, in_queue=2724, util=96.15% 00:31:05.195 nvme0n4: ios=2514/0, merge=0/0, ticks=2372/0, in_queue=2372, util=96.45% 00:31:05.195 18:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:05.195 18:07:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:05.455 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:05.455 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:05.715 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:05.715 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:05.715 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:05.715 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:05.974 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:05.974 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3306107 00:31:05.974 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:05.974 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:05.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:05.974 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:05.974 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:05.974 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:05.974 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:05.974 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:05.974 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:05.974 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:05.974 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:05.974 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:05.974 nvmf hotplug test: fio failed as expected 
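The fio errors above are the expected outcome of this test rather than a regression: target/fio.sh deletes each backing Malloc bdev (Malloc2 through Malloc6) while fio jobs are still reading from the exported /dev/nvme0n1..n4 namespaces, so in-flight reads complete with err=5 (Input/output error) or err=95 (Operation not supported), the script records fio_status=4, disconnects the controller, and only then prints "fio failed as expected". A minimal sketch of the same hotplug check, assuming a target already serving nqn.2016-06.io.spdk:cnode1 and a hypothetical job file named hotplug.fio that targets the connected namespaces:

    # start fio in the background against the exported namespaces
    fio hotplug.fio &              # hotplug.fio is a placeholder job file, not from this log
    fio_pid=$!
    # delete a backing bdev while I/O is still in flight
    ./scripts/rpc.py bdev_malloc_delete Malloc2
    # fio should now exit non-zero with EIO on the affected device
    wait "$fio_pid" && echo "unexpected: fio passed" || echo "fio failed as expected"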
00:31:05.975 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:06.235 rmmod nvme_tcp 00:31:06.235 rmmod nvme_fabrics 00:31:06.235 rmmod nvme_keyring 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3302628 ']' 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3302628 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3302628 ']' 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3302628 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3302628 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3302628' 00:31:06.235 killing process with pid 3302628 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@973 -- # kill 3302628 00:31:06.235 18:07:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3302628 00:31:06.495 18:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:06.495 18:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:06.495 18:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:06.495 18:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:06.495 18:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:06.495 18:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:06.495 18:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:06.495 18:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:06.495 18:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:06.495 18:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.495 18:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:06.495 18:07:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.396 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:08.396 00:31:08.396 real 0m25.146s 00:31:08.396 user 2m5.826s 00:31:08.396 sys 0m9.778s 00:31:08.396 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:08.396 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:08.396 ************************************ 00:31:08.396 END TEST nvmf_fio_target 00:31:08.396 ************************************ 00:31:08.396 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:08.396 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:08.396 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:08.396 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:08.396 ************************************ 00:31:08.396 START TEST nvmf_bdevio 00:31:08.396 ************************************ 00:31:08.396 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:08.656 * Looking for test storage... 
00:31:08.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:08.656 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:08.656 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:08.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.657 --rc genhtml_branch_coverage=1 00:31:08.657 --rc genhtml_function_coverage=1 00:31:08.657 --rc genhtml_legend=1 00:31:08.657 --rc geninfo_all_blocks=1 00:31:08.657 --rc geninfo_unexecuted_blocks=1 00:31:08.657 00:31:08.657 ' 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:08.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.657 --rc genhtml_branch_coverage=1 00:31:08.657 --rc genhtml_function_coverage=1 00:31:08.657 --rc genhtml_legend=1 00:31:08.657 --rc geninfo_all_blocks=1 00:31:08.657 --rc geninfo_unexecuted_blocks=1 00:31:08.657 00:31:08.657 ' 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:08.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.657 --rc genhtml_branch_coverage=1 00:31:08.657 --rc genhtml_function_coverage=1 00:31:08.657 --rc genhtml_legend=1 00:31:08.657 --rc geninfo_all_blocks=1 00:31:08.657 --rc geninfo_unexecuted_blocks=1 00:31:08.657 00:31:08.657 ' 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:08.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:08.657 --rc genhtml_branch_coverage=1 00:31:08.657 --rc genhtml_function_coverage=1 00:31:08.657 --rc genhtml_legend=1 00:31:08.657 --rc geninfo_all_blocks=1 00:31:08.657 --rc geninfo_unexecuted_blocks=1 00:31:08.657 00:31:08.657 ' 00:31:08.657 18:07:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.657 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.658 18:07:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:08.658 18:07:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:13.931 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:13.931 18:08:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:13.931 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:13.932 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:13.932 Found net devices under 0000:31:00.0: cvl_0_0 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:13.932 Found net devices under 0000:31:00.1: cvl_0_1 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:13.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:13.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:31:13.932 00:31:13.932 --- 10.0.0.2 ping statistics --- 00:31:13.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.932 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:13.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:13.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:31:13.932 00:31:13.932 --- 10.0.0.1 ping statistics --- 00:31:13.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.932 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:13.932 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:13.933 18:08:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:13.933 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3311652 00:31:13.933 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3311652 00:31:13.933 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3311652 ']' 00:31:13.933 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.933 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:13.933 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.933 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:13.933 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:13.933 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:13.933 [2024-12-06 18:08:01.708814] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:13.933 [2024-12-06 18:08:01.709802] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:31:13.933 [2024-12-06 18:08:01.709840] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:14.193 [2024-12-06 18:08:01.781174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:14.193 [2024-12-06 18:08:01.809815] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:14.193 [2024-12-06 18:08:01.809840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:14.193 [2024-12-06 18:08:01.809848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:14.193 [2024-12-06 18:08:01.809854] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:14.193 [2024-12-06 18:08:01.809858] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:14.193 [2024-12-06 18:08:01.811327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:14.193 [2024-12-06 18:08:01.811483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:14.193 [2024-12-06 18:08:01.811633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:14.193 [2024-12-06 18:08:01.811635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:14.193 [2024-12-06 18:08:01.861426] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
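For reference, the -m 0x78 core mask decodes as binary 0111 1000, i.e. bits 3 through 6, which is why the four reactors above come up on cores 3, 4, 5 and 6; with --interrupt-mode each reactor sleeps waiting on event fds instead of busy-polling, and the thread.c notices here and just below confirm the app thread and each nvmf poll-group thread being placed in interrupt mode. A minimal sketch of the equivalent launch, assuming the in-tree nvmf_tgt binary and the cvl_0_0_ns_spdk namespace created earlier in this log:

    # run the target inside the test netns, interrupt-driven, on cores 3-6
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    # wait for the RPC socket to come up before issuing transport/subsystem RPCs
    ./scripts/rpc.py -t 30 rpc_get_methods > /dev/null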
00:31:14.193 [2024-12-06 18:08:01.862470] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:14.193 [2024-12-06 18:08:01.863275] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:14.193 [2024-12-06 18:08:01.863458] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:14.193 [2024-12-06 18:08:01.863464] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:14.193 [2024-12-06 18:08:01.912427] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:14.193 Malloc0 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.193 18:08:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:14.193 [2024-12-06 18:08:01.968212] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:14.193 { 00:31:14.193 "params": { 00:31:14.193 "name": "Nvme$subsystem", 00:31:14.193 "trtype": "$TEST_TRANSPORT", 00:31:14.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.193 "adrfam": "ipv4", 00:31:14.193 "trsvcid": "$NVMF_PORT", 00:31:14.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.193 "hdgst": ${hdgst:-false}, 00:31:14.193 "ddgst": ${ddgst:-false} 00:31:14.193 }, 00:31:14.193 "method": "bdev_nvme_attach_controller" 00:31:14.193 } 00:31:14.193 EOF 00:31:14.193 )") 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:14.193 18:08:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:14.193 "params": { 00:31:14.193 "name": "Nvme1", 00:31:14.193 "trtype": "tcp", 00:31:14.193 "traddr": "10.0.0.2", 00:31:14.193 "adrfam": "ipv4", 00:31:14.193 "trsvcid": "4420", 00:31:14.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:14.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:14.194 "hdgst": false, 00:31:14.194 "ddgst": false 00:31:14.194 }, 00:31:14.194 "method": "bdev_nvme_attach_controller" 00:31:14.194 }' 00:31:14.194 [2024-12-06 18:08:02.005121] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
00:31:14.194 [2024-12-06 18:08:02.005169] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3311679 ] 00:31:14.452 [2024-12-06 18:08:02.072226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:14.452 [2024-12-06 18:08:02.105597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:14.452 [2024-12-06 18:08:02.105748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.452 [2024-12-06 18:08:02.105749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:14.711 I/O targets: 00:31:14.711 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:14.711 00:31:14.711 00:31:14.711 CUnit - A unit testing framework for C - Version 2.1-3 00:31:14.711 http://cunit.sourceforge.net/ 00:31:14.711 00:31:14.711 00:31:14.711 Suite: bdevio tests on: Nvme1n1 00:31:14.711 Test: blockdev write read block ...passed 00:31:14.711 Test: blockdev write zeroes read block ...passed 00:31:14.711 Test: blockdev write zeroes read no split ...passed 00:31:14.711 Test: blockdev write zeroes read split ...passed 00:31:14.711 Test: blockdev write zeroes read split partial ...passed 00:31:14.711 Test: blockdev reset ...[2024-12-06 18:08:02.521653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:14.711 [2024-12-06 18:08:02.521706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a70e0 (9): Bad file descriptor 00:31:14.969 [2024-12-06 18:08:02.616097] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:31:14.969 passed 00:31:14.969 Test: blockdev write read 8 blocks ...passed 00:31:14.969 Test: blockdev write read size > 128k ...passed 00:31:14.969 Test: blockdev write read invalid size ...passed 00:31:14.969 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:14.969 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:14.969 Test: blockdev write read max offset ...passed 00:31:14.969 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:14.969 Test: blockdev writev readv 8 blocks ...passed 00:31:14.969 Test: blockdev writev readv 30 x 1block ...passed 00:31:14.969 Test: blockdev writev readv block ...passed 00:31:14.969 Test: blockdev writev readv size > 128k ...passed 00:31:14.969 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:15.229 Test: blockdev comparev and writev ...[2024-12-06 18:08:02.798042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:15.229 [2024-12-06 18:08:02.798069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:15.229 [2024-12-06 18:08:02.798080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:15.229 [2024-12-06 18:08:02.798087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:15.229 [2024-12-06 18:08:02.798540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:15.229 [2024-12-06 18:08:02.798550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:15.229 [2024-12-06 18:08:02.798561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:15.229 [2024-12-06 18:08:02.798567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:15.229 [2024-12-06 18:08:02.799079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:15.229 [2024-12-06 18:08:02.799088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:15.229 [2024-12-06 18:08:02.799099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:15.229 [2024-12-06 18:08:02.799109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:15.229 [2024-12-06 18:08:02.799581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:15.229 [2024-12-06 18:08:02.799589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:15.229 [2024-12-06 18:08:02.799599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:15.229 [2024-12-06 18:08:02.799604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:15.229 passed 00:31:15.229 Test: blockdev nvme passthru rw ...passed 00:31:15.229 Test: blockdev nvme passthru vendor specific ...[2024-12-06 18:08:02.882770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:15.229 [2024-12-06 18:08:02.882781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:15.229 [2024-12-06 18:08:02.883096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:15.229 [2024-12-06 18:08:02.883112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:15.229 [2024-12-06 18:08:02.883464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:15.229 [2024-12-06 18:08:02.883473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:15.229 [2024-12-06 18:08:02.883826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:15.229 [2024-12-06 18:08:02.883834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:15.229 passed 00:31:15.229 Test: blockdev nvme admin passthru ...passed 00:31:15.229 Test: blockdev copy ...passed 00:31:15.229 00:31:15.229 Run Summary: Type Total Ran Passed Failed Inactive 00:31:15.229 suites 1 1 n/a 0 0 00:31:15.229 tests 23 23 23 0 0 00:31:15.229 asserts 152 152 152 0 n/a 00:31:15.229 00:31:15.229 Elapsed time = 1.080 seconds 00:31:15.229 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:15.229 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.229 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:15.229 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.229 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:15.229 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:15.229 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:15.229 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:15.229 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:15.229 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:15.229 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:15.229 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:15.229 rmmod nvme_tcp 00:31:15.488 rmmod nvme_fabrics 00:31:15.488 rmmod nvme_keyring 00:31:15.488 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:31:15.488 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:15.488 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3311652 ']' 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3311652 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3311652 ']' 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3311652 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3311652 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3311652' 00:31:15.489 killing process with pid 3311652 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3311652 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3311652 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:15.489 18:08:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.024 18:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:18.024 00:31:18.024 real 0m9.129s 00:31:18.024 user 
0m8.325s 00:31:18.024 sys 0m4.704s 00:31:18.024 18:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:18.024 18:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:18.024 ************************************ 00:31:18.024 END TEST nvmf_bdevio 00:31:18.024 ************************************ 00:31:18.024 18:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:18.024 00:31:18.024 real 4m22.839s 00:31:18.024 user 9m44.721s 00:31:18.024 sys 1m36.771s 00:31:18.024 18:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:18.024 18:08:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:18.024 ************************************ 00:31:18.024 END TEST nvmf_target_core_interrupt_mode 00:31:18.024 ************************************ 00:31:18.024 18:08:05 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:18.024 18:08:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:18.024 18:08:05 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:18.024 18:08:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:18.024 ************************************ 00:31:18.024 START TEST nvmf_interrupt 00:31:18.024 ************************************ 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:18.024 * Looking for test storage... 
00:31:18.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:18.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.024 --rc genhtml_branch_coverage=1 00:31:18.024 --rc genhtml_function_coverage=1 00:31:18.024 --rc genhtml_legend=1 00:31:18.024 --rc geninfo_all_blocks=1 00:31:18.024 --rc geninfo_unexecuted_blocks=1 00:31:18.024 00:31:18.024 ' 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:18.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.024 --rc genhtml_branch_coverage=1 00:31:18.024 --rc genhtml_function_coverage=1 00:31:18.024 --rc genhtml_legend=1 00:31:18.024 --rc geninfo_all_blocks=1 00:31:18.024 --rc geninfo_unexecuted_blocks=1 00:31:18.024 00:31:18.024 ' 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:18.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.024 --rc genhtml_branch_coverage=1 00:31:18.024 --rc genhtml_function_coverage=1 00:31:18.024 --rc genhtml_legend=1 00:31:18.024 --rc geninfo_all_blocks=1 00:31:18.024 --rc geninfo_unexecuted_blocks=1 00:31:18.024 00:31:18.024 ' 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:18.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.024 --rc genhtml_branch_coverage=1 00:31:18.024 --rc genhtml_function_coverage=1 00:31:18.024 --rc genhtml_legend=1 00:31:18.024 --rc geninfo_all_blocks=1 00:31:18.024 --rc geninfo_unexecuted_blocks=1 00:31:18.024 00:31:18.024 ' 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:18.024 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:31:18.025 18:08:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- 
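
The '[' 1 -eq 1 ']' branch traced at nvmf/common.sh@33-34 above is where this suite differs from the polling-mode runs: --interrupt-mode is appended to the target's argument vector, so its reactors sleep on file descriptors instead of busy-polling. Condensed, with the binary path taken from the launch record later in this log:

  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
            -i "$NVMF_APP_SHM_ID" -e 0xFFFF)
  NVMF_APP+=(--interrupt-mode)   # reactors wait for events rather than spin
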
nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:23.297 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.297 18:08:10 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:23.297 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:23.297 Found net devices under 0000:31:00.0: cvl_0_0 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:23.297 Found net devices under 0000:31:00.1: cvl_0_1 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:23.297 18:08:10 
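
The scan above matches the two Intel E810 functions (device id 0x159b) and resolves each PCI address to its kernel netdev through sysfs, yielding cvl_0_0 and cvl_0_1. The per-device lookup, condensed from the traced nvmf/common.sh@411-428 commands:

  pci=0000:31:00.0                                 # first E810 port in this run
  pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)   # e.g. .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")          # keep just the netdev names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
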
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:23.297 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:23.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:23.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:31:23.298 00:31:23.298 --- 10.0.0.2 ping statistics --- 00:31:23.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.298 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:23.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:23.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:31:23.298 00:31:23.298 --- 10.0.0.1 ping statistics --- 00:31:23.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.298 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3316337 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3316337 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3316337 ']' 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:23.298 18:08:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:23.298 [2024-12-06 18:08:10.939093] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:23.298 [2024-12-06 18:08:10.940098] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:31:23.298 [2024-12-06 18:08:10.940151] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:23.298 [2024-12-06 18:08:11.023874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:23.298 [2024-12-06 18:08:11.059692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
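
Before the target app starts, nvmf_tcp_init splits the two E810 ports across a network namespace so initiator and target traffic crosses a real link: cvl_0_0 (10.0.0.2) moves into cvl_0_0_ns_spdk for the target, cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator, and the two pings above verify both directions before anything NVMe-related runs. Condensed from the traced nvmf/common.sh@271-291 commands:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
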
00:31:23.298 [2024-12-06 18:08:11.059724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:23.298 [2024-12-06 18:08:11.059732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:23.298 [2024-12-06 18:08:11.059739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:23.298 [2024-12-06 18:08:11.059745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:23.298 [2024-12-06 18:08:11.060880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.298 [2024-12-06 18:08:11.060885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.298 [2024-12-06 18:08:11.117094] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:23.298 [2024-12-06 18:08:11.117744] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:23.298 [2024-12-06 18:08:11.117851] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:31:24.236 5000+0 records in 00:31:24.236 5000+0 records out 00:31:24.236 10240000 bytes (10 MB, 9.8 MiB) copied, 0.00732643 s, 1.4 GB/s 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:24.236 AIO0 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:24.236 [2024-12-06 18:08:11.785481] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.236 18:08:11 
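
With nvmf_tgt listening on its RPC socket, the harness provisions storage over JSON-RPC: the 10 MB file just written by dd becomes AIO bdev AIO0, the TCP transport is created with 256-entry queues, and the records that follow add subsystem cnode1, attach the namespace, and open the 10.0.0.2:4420 listener. The same bring-up via SPDK's scripts/rpc.py (the harness drives these RPCs through its rpc_cmd wrapper; the /tmp path here is illustrative):

  dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000
  ./scripts/rpc.py bdev_aio_create /tmp/aiofile AIO0 2048
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
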
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:24.236 [2024-12-06 18:08:11.809751] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3316337 0 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3316337 0 idle 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3316337 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3316337 -w 256 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3316337 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.23 reactor_0' 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3316337 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.23 reactor_0 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3316337 1 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3316337 1 idle 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3316337 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:24.236 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:24.237 18:08:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3316337 -w 256 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3316380 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3316380 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3316523 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3316337 0 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # 
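
The reactor_is_idle checks above (and the reactor_is_busy checks that follow) reduce to one thing: sample a reactor thread's CPU% with a single batch-mode top and compare it to a threshold — at most 30% counts as idle, at least BUSY_THRESHOLD (overridden to 30 here) counts as busy, with up to ten 1-second retries on the busy path. A reconstruction of the sampling pipeline from the traced interrupt/common.sh commands (the helper name reactor_cpu is mine):

  reactor_cpu() {
      local pid=$1 idx=$2
      # one batch sample, threads of $pid only, wide output;
      # field 9 of the matching reactor_<idx> row is %CPU
      top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" |
          sed -e 's/^\s*//g' | awk '{print $9}'
  }
  reactor_cpu 3316337 0   # prints 0.0 while idle, 99.9 under perf load
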
reactor_is_busy_or_idle 3316337 0 busy 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3316337 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3316337 -w 256 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3316337 root 20 0 128.2g 43776 32256 R 20.0 0.0 0:00.26 reactor_0' 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3316337 root 20 0 128.2g 43776 32256 R 20.0 0.0 0:00.26 reactor_0 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=20.0 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=20 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:24.544 18:08:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3316337 -w 256 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3316337 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:02.61 reactor_0' 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3316337 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:02.61 reactor_0 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3316337 1 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3316337 1 busy 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3316337 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3316337 -w 256 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3316380 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:01.35 reactor_1' 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3316380 root 20 0 128.2g 43776 32256 R 99.9 0.0 0:01.35 reactor_1 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:25.922 18:08:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3316523 00:31:35.917 Initializing NVMe Controllers 00:31:35.917 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:35.917 Controller IO queue size 256, less than required. 00:31:35.917 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:35.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:35.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:35.918 Initialization complete. Launching workers. 
00:31:35.918 ======================================================== 00:31:35.918 Latency(us) 00:31:35.918 Device Information : IOPS MiB/s Average min max 00:31:35.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 20807.54 81.28 12307.91 3544.72 20691.79 00:31:35.918 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 22094.32 86.31 11590.57 3539.78 20573.27 00:31:35.918 ======================================================== 00:31:35.918 Total : 42901.86 167.59 11938.48 3539.78 20691.79 00:31:35.918 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3316337 0 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3316337 0 idle 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3316337 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3316337 -w 256 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3316337 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:20.22 reactor_0' 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3316337 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:20.22 reactor_0 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3316337 1 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3316337 1 idle 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3316337 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- 
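
The summary table is internally consistent: the two I/O reactors (lcores 2 and 3, selected by -c 0xC on the perf command line) total 42901.86 IOPS of 4 KiB random mixed I/O, and at 4 KiB per request that is the 167.59 MiB/s shown. A quick cross-check:

  echo '42901.86 * 4096 / 1048576' | bc -l   # -> 167.585..., matching the Total row

The "Controller IO queue size 256, less than required" notice appears to follow from running perf at -q 256 against the 256-entry queues configured at nvmf_create_transport time, so some requests queue in the driver rather than on the wire; it is informational, not a failure.
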
interrupt/common.sh@11 -- # local idx=1 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3316337 -w 256 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3316380 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.00 reactor_1' 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3316380 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:10.00 reactor_1 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:35.918 18:08:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:35.918 18:08:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:31:35.918 18:08:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:31:35.918 18:08:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:35.918 18:08:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:35.918 18:08:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3316337 0 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3316337 0 idle 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3316337 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3316337 -w 256 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3316337 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:20.38 reactor_0' 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3316337 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:20.38 reactor_0 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3316337 1 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3316337 1 idle 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3316337 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
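
The host attach above uses stock nvme-cli against the listener, waitforserial polls lsblk until a block device with the SPDK serial appears, and the disconnect just below tears it down again; the surrounding idle checks confirm that a connected-but-quiet interrupt-mode target sits at 0.0% CPU. The cycle, condensed (hostnqn/hostid options omitted; the retry loop paraphrases waitforserial's 15-attempt, 2-second poll):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
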
00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3316337 -w 256 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3316380 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.06 reactor_1' 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3316380 root 20 0 128.2g 78336 32256 S 0.0 0.1 0:10.06 reactor_1 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:37.822 18:08:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:38.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:38.079 18:08:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:38.079 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:31:38.079 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:38.079 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:38.079 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:38.079 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:38.079 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:31:38.079 18:08:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:31:38.079 18:08:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:31:38.079 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:38.079 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:31:38.079 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:38.079 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:31:38.080 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:38.080 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:38.080 rmmod nvme_tcp 00:31:38.080 rmmod nvme_fabrics 00:31:38.080 rmmod nvme_keyring 00:31:38.080 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:38.080 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:31:38.080 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:31:38.080 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3316337 ']' 00:31:38.080 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3316337 00:31:38.080 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3316337 ']' 00:31:38.080 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3316337 00:31:38.080 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:31:38.080 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:38.080 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3316337 00:31:38.080 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:38.080 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:38.080 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3316337' 00:31:38.080 killing process with pid 3316337 00:31:38.080 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3316337 00:31:38.080 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3316337 00:31:38.338 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:38.338 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:38.338 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:38.338 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:31:38.338 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:31:38.338 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:38.338 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:31:38.338 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:38.338 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:38.338 18:08:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.338 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:38.338 18:08:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.237 18:08:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:40.237 00:31:40.237 real 0m22.574s 00:31:40.237 user 0m39.627s 00:31:40.237 sys 0m7.394s 00:31:40.237 18:08:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.237 18:08:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:40.237 ************************************ 00:31:40.237 END TEST nvmf_interrupt 00:31:40.237 ************************************ 00:31:40.237 00:31:40.237 real 26m23.241s 00:31:40.237 user 56m53.798s 00:31:40.237 sys 8m2.550s 00:31:40.237 18:08:27 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.237 18:08:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:40.237 ************************************ 00:31:40.237 END TEST nvmf_tcp 00:31:40.237 ************************************ 00:31:40.237 18:08:28 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:31:40.237 18:08:28 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:40.237 18:08:28 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:40.237 18:08:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.237 18:08:28 -- common/autotest_common.sh@10 -- # set +x 00:31:40.237 ************************************ 00:31:40.237 START TEST spdkcli_nvmf_tcp 00:31:40.237 ************************************ 00:31:40.237 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:40.496 * Looking for test storage... 00:31:40.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:40.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.496 --rc genhtml_branch_coverage=1 00:31:40.496 --rc genhtml_function_coverage=1 00:31:40.496 --rc genhtml_legend=1 00:31:40.496 --rc geninfo_all_blocks=1 00:31:40.496 --rc geninfo_unexecuted_blocks=1 00:31:40.496 00:31:40.496 ' 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:40.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.496 --rc genhtml_branch_coverage=1 00:31:40.496 --rc genhtml_function_coverage=1 00:31:40.496 --rc genhtml_legend=1 00:31:40.496 --rc geninfo_all_blocks=1 00:31:40.496 --rc geninfo_unexecuted_blocks=1 00:31:40.496 00:31:40.496 ' 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:40.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.496 --rc genhtml_branch_coverage=1 00:31:40.496 --rc genhtml_function_coverage=1 00:31:40.496 --rc genhtml_legend=1 00:31:40.496 --rc geninfo_all_blocks=1 00:31:40.496 --rc geninfo_unexecuted_blocks=1 00:31:40.496 00:31:40.496 ' 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:40.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.496 --rc genhtml_branch_coverage=1 00:31:40.496 --rc genhtml_function_coverage=1 00:31:40.496 --rc genhtml_legend=1 00:31:40.496 --rc geninfo_all_blocks=1 00:31:40.496 --rc geninfo_unexecuted_blocks=1 00:31:40.496 00:31:40.496 ' 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:31:40.496 
18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.496 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:31:40.497 18:08:28 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:40.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3320240 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3320240 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3320240 ']' 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:40.497 18:08:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:40.497 [2024-12-06 18:08:28.216267] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
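[editor's note] Here run_nvmf_tgt boots the target with two reactors (-m 0x3 -p 0) and parks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers; the log only prints the waiting message, so the polling body in this sketch is an assumption modeled on stock SPDK helpers (rpc.py and the rpc_get_methods call come from the upstream tree, not from this log):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -m 0x3 -p 0 &
    nvmf_tgt_pid=$!
    # Poll until the target's RPC server is accepting on the UNIX domain socket.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmf_tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done

Once the socket answers, the spdkcli_job.py run that follows can drive the /bdevs and /nvmf trees over that same socket.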
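[editor's note] Both suites in this log tear down through the same killprocess pattern, visible earlier at nvmf/common.sh@518 (pid 3316337) and again below at spdkcli/nvmf.sh@90 (pid 3320240). Condensed from those trace lines, with the error branches slightly simplified:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        # kill -0 only probes for existence; the spdkcli cleanup() hits the
        # "No such process" branch here because the pid was already reaped.
        kill -0 "$pid" 2>/dev/null || { echo "Process with pid $pid is not found"; return 0; }
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in both traces
        [[ $process_name != sudo ]] || return 1           # refuse to signal a privileged wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }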
00:31:40.497 [2024-12-06 18:08:28.216331] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3320240 ] 00:31:40.497 [2024-12-06 18:08:28.282592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:40.497 [2024-12-06 18:08:28.313875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.497 [2024-12-06 18:08:28.313874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.756 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:40.756 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:31:40.756 18:08:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:40.756 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:40.756 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:40.756 18:08:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:40.756 18:08:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:40.756 18:08:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:40.756 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:40.756 18:08:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:40.756 18:08:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:40.756 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:40.756 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:40.756 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:40.756 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:40.756 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:40.756 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:40.756 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:40.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:40.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:40.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:40.756 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:40.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:40.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:40.756 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:40.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:40.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:31:40.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:40.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:40.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:40.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:40.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:40.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:40.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:40.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:40.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:40.756 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:40.756 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:40.756 ' 00:31:43.288 [2024-12-06 18:08:30.814273] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:44.299 [2024-12-06 18:08:32.050011] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:46.897 [2024-12-06 18:08:34.336344] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:48.810 [2024-12-06 18:08:36.309891] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:50.205 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:50.205 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:50.205 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:50.205 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:50.205 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:50.205 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:50.205 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:50.205 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:50.205 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:50.205 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:50.205 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:50.205 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:50.205 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:50.205 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:50.205 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:50.205 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:50.205 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:50.205 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:50.206 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:50.206 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:50.206 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:50.206 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:50.206 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:50.206 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:50.206 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:50.206 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:50.206 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:50.206 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:50.206 18:08:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:50.206 18:08:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:50.206 18:08:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:50.206 18:08:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:50.206 18:08:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:50.206 18:08:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:50.206 18:08:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:50.206 18:08:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:50.775 18:08:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:50.775 18:08:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:50.775 18:08:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:50.775 18:08:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:50.775 18:08:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:50.775 
18:08:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:50.775 18:08:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:50.775 18:08:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:50.775 18:08:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:50.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:50.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:50.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:50.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:50.775 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:50.775 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:50.775 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:50.775 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:50.775 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:50.775 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:50.775 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:50.775 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:50.775 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:50.775 ' 00:31:56.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:56.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:56.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:56.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:56.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:56.049 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:56.049 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:56.049 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:56.049 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:56.049 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:56.049 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:56.049 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:56.049 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:56.049 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:56.049 
18:08:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3320240 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3320240 ']' 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3320240 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3320240 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3320240' 00:31:56.049 killing process with pid 3320240 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3320240 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3320240 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3320240 ']' 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3320240 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3320240 ']' 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3320240 00:31:56.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3320240) - No such process 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3320240 is not found' 00:31:56.049 Process with pid 3320240 is not found 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:56.049 00:31:56.049 real 0m15.602s 00:31:56.049 user 0m33.272s 00:31:56.049 sys 0m0.554s 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:56.049 18:08:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:56.049 ************************************ 00:31:56.049 END TEST spdkcli_nvmf_tcp 00:31:56.049 ************************************ 00:31:56.049 18:08:43 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:56.049 18:08:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:56.049 18:08:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:56.049 18:08:43 -- common/autotest_common.sh@10 -- # set +x 00:31:56.049 ************************************ 00:31:56.049 START TEST nvmf_identify_passthru 00:31:56.049 ************************************ 00:31:56.049 18:08:43 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:56.049 * Looking for test 
storage... 00:31:56.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:56.049 18:08:43 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:56.049 18:08:43 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:31:56.049 18:08:43 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:56.049 18:08:43 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:56.049 18:08:43 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:31:56.049 18:08:43 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:56.049 18:08:43 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:56.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.049 --rc genhtml_branch_coverage=1 00:31:56.049 --rc genhtml_function_coverage=1 00:31:56.049 --rc genhtml_legend=1 00:31:56.049 --rc geninfo_all_blocks=1 00:31:56.049 --rc geninfo_unexecuted_blocks=1 00:31:56.049 00:31:56.049 ' 00:31:56.049 18:08:43 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:56.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.049 --rc genhtml_branch_coverage=1 00:31:56.049 --rc genhtml_function_coverage=1 00:31:56.049 --rc genhtml_legend=1 00:31:56.049 --rc geninfo_all_blocks=1 00:31:56.049 --rc geninfo_unexecuted_blocks=1 00:31:56.049 00:31:56.049 ' 00:31:56.049 18:08:43 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:56.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.049 --rc genhtml_branch_coverage=1 00:31:56.049 --rc genhtml_function_coverage=1 00:31:56.049 --rc genhtml_legend=1 00:31:56.049 --rc geninfo_all_blocks=1 00:31:56.049 --rc geninfo_unexecuted_blocks=1 00:31:56.049 00:31:56.049 ' 00:31:56.049 18:08:43 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:56.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.049 --rc genhtml_branch_coverage=1 00:31:56.049 --rc genhtml_function_coverage=1 00:31:56.049 --rc genhtml_legend=1 00:31:56.049 --rc geninfo_all_blocks=1 00:31:56.049 --rc geninfo_unexecuted_blocks=1 00:31:56.049 00:31:56.049 ' 00:31:56.049 18:08:43 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:56.049 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:56.049 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:56.049 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:56.050 18:08:43 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:56.050 18:08:43 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:56.050 18:08:43 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:56.050 18:08:43 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:56.050 18:08:43 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.050 18:08:43 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.050 18:08:43 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.050 18:08:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:56.050 18:08:43 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:56.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:56.050 18:08:43 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:56.050 18:08:43 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:56.050 18:08:43 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:56.050 18:08:43 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:56.050 18:08:43 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:56.050 18:08:43 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.050 18:08:43 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.050 18:08:43 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.050 18:08:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:56.050 18:08:43 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.050 18:08:43 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.050 18:08:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:56.050 18:08:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:56.050 18:08:43 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:31:56.050 18:08:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:01.329 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:01.329 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:01.329 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:01.329 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:01.329 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:01.329 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:01.329 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:01.329 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:01.330 18:08:48 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:01.330 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:01.330 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:01.330 Found net devices under 0000:31:00.0: cvl_0_0 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:01.330 Found net devices under 0000:31:00.1: cvl_0_1 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:01.330 18:08:48 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:01.330 18:08:48 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:01.330 18:08:49 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:01.330 18:08:49 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:01.330 18:08:49 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:01.330 18:08:49 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:01.330 18:08:49 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:01.330 18:08:49 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:01.330 18:08:49 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:01.330 18:08:49 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:01.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:01.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:32:01.330 00:32:01.330 --- 10.0.0.2 ping statistics --- 00:32:01.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.330 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:32:01.330 18:08:49 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:01.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:01.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:32:01.330 00:32:01.330 --- 10.0.0.1 ping statistics --- 00:32:01.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.330 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:32:01.330 18:08:49 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:01.330 18:08:49 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:01.330 18:08:49 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:01.330 18:08:49 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:01.330 18:08:49 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:01.330 18:08:49 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:01.330 18:08:49 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:01.590 18:08:49 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:01.590 18:08:49 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:01.590 18:08:49 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:01.590 18:08:49 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:01.590 18:08:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:01.590 18:08:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:01.590 18:08:49 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:01.590 18:08:49 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:01.590 18:08:49 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:01.590 18:08:49 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:01.590 18:08:49 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:01.590 18:08:49 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:01.590 18:08:49 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:01.590 18:08:49 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:01.591 18:08:49 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:01.591 18:08:49 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:01.591 18:08:49 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:32:01.591 18:08:49 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:32:01.591 18:08:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:32:01.591 18:08:49 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:32:01.591 18:08:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:32:01.591 18:08:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:01.591 18:08:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:02.160 18:08:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605499 00:32:02.160 18:08:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:32:02.160 18:08:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:02.160 18:08:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:02.420 18:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:32:02.420 18:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:02.420 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:02.420 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:02.420 18:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:02.420 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:02.420 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:02.420 18:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3327660 00:32:02.420 18:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:02.420 18:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:02.420 18:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3327660 00:32:02.420 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3327660 ']' 00:32:02.420 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:02.420 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:02.420 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:02.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:02.420 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:02.420 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:02.420 [2024-12-06 18:08:50.238944] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:32:02.420 [2024-12-06 18:08:50.238996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:02.680 [2024-12-06 18:08:50.309475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:02.680 [2024-12-06 18:08:50.339900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:02.680 [2024-12-06 18:08:50.339929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
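Note: condensed from the nvmf_tcp_init commands logged above, this is the single-host topology every nvmf TCP phy test in this run uses: one E810 port stays in the default namespace as the initiator, while the peer port (presumably cabled back-to-back) moves into its own namespace as the target, so traffic actually crosses the link. A minimal sketch, with the interface names and addresses from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ping -c 1 10.0.0.2                                   # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator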
00:32:02.680 [2024-12-06 18:08:50.339935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:02.680 [2024-12-06 18:08:50.339939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:02.680 [2024-12-06 18:08:50.339944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:02.680 [2024-12-06 18:08:50.341321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:02.680 [2024-12-06 18:08:50.341481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:02.680 [2024-12-06 18:08:50.341595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.680 [2024-12-06 18:08:50.341597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:02.680 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:02.680 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:32:02.680 18:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:02.680 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.680 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:02.680 INFO: Log level set to 20 00:32:02.680 INFO: Requests: 00:32:02.680 { 00:32:02.680 "jsonrpc": "2.0", 00:32:02.680 "method": "nvmf_set_config", 00:32:02.680 "id": 1, 00:32:02.680 "params": { 00:32:02.680 "admin_cmd_passthru": { 00:32:02.680 "identify_ctrlr": true 00:32:02.680 } 00:32:02.680 } 00:32:02.680 } 00:32:02.680 00:32:02.680 INFO: response: 00:32:02.680 { 00:32:02.680 "jsonrpc": "2.0", 00:32:02.680 "id": 1, 00:32:02.680 "result": true 00:32:02.680 } 00:32:02.680 00:32:02.680 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.680 18:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:02.680 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.680 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:02.680 INFO: Setting log level to 20 00:32:02.680 INFO: Setting log level to 20 00:32:02.680 INFO: Log level set to 20 00:32:02.680 INFO: Log level set to 20 00:32:02.680 INFO: Requests: 00:32:02.680 { 00:32:02.680 "jsonrpc": "2.0", 00:32:02.680 "method": "framework_start_init", 00:32:02.680 "id": 1 00:32:02.680 } 00:32:02.680 00:32:02.680 INFO: Requests: 00:32:02.680 { 00:32:02.680 "jsonrpc": "2.0", 00:32:02.680 "method": "framework_start_init", 00:32:02.680 "id": 1 00:32:02.680 } 00:32:02.680 00:32:02.680 [2024-12-06 18:08:50.424752] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:02.680 INFO: response: 00:32:02.680 { 00:32:02.680 "jsonrpc": "2.0", 00:32:02.680 "id": 1, 00:32:02.680 "result": true 00:32:02.680 } 00:32:02.680 00:32:02.680 INFO: response: 00:32:02.680 { 00:32:02.680 "jsonrpc": "2.0", 00:32:02.680 "id": 1, 00:32:02.680 "result": true 00:32:02.680 } 00:32:02.680 00:32:02.680 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.680 18:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:02.680 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.680 18:08:50 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:32:02.680 INFO: Setting log level to 40 00:32:02.680 INFO: Setting log level to 40 00:32:02.680 INFO: Setting log level to 40 00:32:02.680 [2024-12-06 18:08:50.433760] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:02.680 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.680 18:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:02.680 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:02.680 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:02.680 18:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:32:02.680 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.680 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:03.248 Nvme0n1 00:32:03.248 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.248 18:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:03.248 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.248 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:03.248 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.248 18:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:03.248 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.248 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:03.248 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.248 18:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:03.248 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.248 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:03.248 [2024-12-06 18:08:50.791161] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:03.248 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.248 18:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:03.248 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.248 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:03.248 [ 00:32:03.248 { 00:32:03.248 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:03.248 "subtype": "Discovery", 00:32:03.248 "listen_addresses": [], 00:32:03.248 "allow_any_host": true, 00:32:03.248 "hosts": [] 00:32:03.248 }, 00:32:03.249 { 00:32:03.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:03.249 "subtype": "NVMe", 00:32:03.249 "listen_addresses": [ 00:32:03.249 { 00:32:03.249 "trtype": "TCP", 00:32:03.249 "adrfam": "IPv4", 00:32:03.249 "traddr": "10.0.0.2", 00:32:03.249 "trsvcid": "4420" 00:32:03.249 } 00:32:03.249 ], 00:32:03.249 "allow_any_host": true, 00:32:03.249 "hosts": [], 00:32:03.249 "serial_number": 
"SPDK00000000000001", 00:32:03.249 "model_number": "SPDK bdev Controller", 00:32:03.249 "max_namespaces": 1, 00:32:03.249 "min_cntlid": 1, 00:32:03.249 "max_cntlid": 65519, 00:32:03.249 "namespaces": [ 00:32:03.249 { 00:32:03.249 "nsid": 1, 00:32:03.249 "bdev_name": "Nvme0n1", 00:32:03.249 "name": "Nvme0n1", 00:32:03.249 "nguid": "363447305260549900253845000000A3", 00:32:03.249 "uuid": "36344730-5260-5499-0025-3845000000a3" 00:32:03.249 } 00:32:03.249 ] 00:32:03.249 } 00:32:03.249 ] 00:32:03.249 18:08:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.249 18:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:03.249 18:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:03.249 18:08:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:03.249 18:08:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605499 00:32:03.249 18:08:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:03.249 18:08:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:03.249 18:08:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:03.507 18:08:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:32:03.507 18:08:51 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605499 '!=' S64GNE0R605499 ']' 00:32:03.507 18:08:51 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:32:03.507 18:08:51 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:03.507 18:08:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.507 18:08:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:03.507 18:08:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.507 18:08:51 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:03.507 18:08:51 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:03.507 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:03.507 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:03.507 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:03.507 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:03.507 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:03.507 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:03.507 rmmod nvme_tcp 00:32:03.507 rmmod nvme_fabrics 00:32:03.507 rmmod nvme_keyring 00:32:03.507 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:03.507 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:03.507 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:03.507 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
3327660 ']' 00:32:03.507 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3327660 00:32:03.507 18:08:51 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3327660 ']' 00:32:03.507 18:08:51 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3327660 00:32:03.507 18:08:51 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:32:03.507 18:08:51 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:03.507 18:08:51 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3327660 00:32:03.507 18:08:51 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:03.507 18:08:51 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:03.507 18:08:51 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3327660' 00:32:03.507 killing process with pid 3327660 00:32:03.507 18:08:51 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3327660 00:32:03.507 18:08:51 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3327660 00:32:03.767 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:03.767 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:03.768 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:03.768 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:32:03.768 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:32:03.768 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:03.768 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:32:03.768 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:03.768 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:03.768 18:08:51 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.768 18:08:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:03.768 18:08:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.300 18:08:53 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:06.300 00:32:06.300 real 0m9.825s 00:32:06.300 user 0m5.836s 00:32:06.300 sys 0m4.783s 00:32:06.300 18:08:53 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:06.300 18:08:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:06.301 ************************************ 00:32:06.301 END TEST nvmf_identify_passthru 00:32:06.301 ************************************ 00:32:06.301 18:08:53 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:06.301 18:08:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:06.301 18:08:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:06.301 18:08:53 -- common/autotest_common.sh@10 -- # set +x 00:32:06.301 ************************************ 00:32:06.301 START TEST nvmf_dif 00:32:06.301 ************************************ 00:32:06.301 18:08:53 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:06.301 * Looking for test storage... 
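Note: the iptr teardown above works because of how the rule was installed: every SPDK-added rule carries an 'SPDK_NVMF' comment, so cleanup can filter the saved ruleset instead of tracking rule positions. Both halves, taken from the commands visible in this log:

# setup: tag the accept rule so it can be found again later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# teardown: rewrite the ruleset minus anything tagged SPDK_NVMF
iptables-save | grep -v SPDK_NVMF | iptables-restore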
00:32:06.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:06.301 18:08:53 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:06.301 18:08:53 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:06.301 18:08:53 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:32:06.301 18:08:53 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:32:06.301 18:08:53 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:06.301 18:08:53 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:06.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.301 --rc genhtml_branch_coverage=1 00:32:06.301 --rc genhtml_function_coverage=1 00:32:06.301 --rc genhtml_legend=1 00:32:06.301 --rc geninfo_all_blocks=1 00:32:06.301 --rc geninfo_unexecuted_blocks=1 00:32:06.301 00:32:06.301 ' 00:32:06.301 18:08:53 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:06.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.301 --rc genhtml_branch_coverage=1 00:32:06.301 --rc genhtml_function_coverage=1 00:32:06.301 --rc genhtml_legend=1 00:32:06.301 --rc geninfo_all_blocks=1 00:32:06.301 --rc geninfo_unexecuted_blocks=1 00:32:06.301 00:32:06.301 ' 00:32:06.301 18:08:53 nvmf_dif -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:32:06.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.301 --rc genhtml_branch_coverage=1 00:32:06.301 --rc genhtml_function_coverage=1 00:32:06.301 --rc genhtml_legend=1 00:32:06.301 --rc geninfo_all_blocks=1 00:32:06.301 --rc geninfo_unexecuted_blocks=1 00:32:06.301 00:32:06.301 ' 00:32:06.301 18:08:53 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:06.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.301 --rc genhtml_branch_coverage=1 00:32:06.301 --rc genhtml_function_coverage=1 00:32:06.301 --rc genhtml_legend=1 00:32:06.301 --rc geninfo_all_blocks=1 00:32:06.301 --rc geninfo_unexecuted_blocks=1 00:32:06.301 00:32:06.301 ' 00:32:06.301 18:08:53 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:06.301 18:08:53 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:06.301 18:08:53 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.301 18:08:53 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.301 18:08:53 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.301 18:08:53 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:32:06.301 18:08:53 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:06.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:06.301 18:08:53 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:32:06.301 18:08:53 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:32:06.301 18:08:53 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:06.301 18:08:53 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:32:06.301 18:08:53 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.301 18:08:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:06.301 18:08:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:06.301 18:08:53 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:32:06.301 18:08:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:11.579 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:11.579 
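Note: the scan above buckets NICs by PCI vendor:device ID and then resolves each function to its kernel net devices through sysfs. A condensed sketch with the IDs that matter in this run; pci_bus_cache is populated earlier in nvmf/common.sh, outside this excerpt:

intel=0x8086 mellanox=0x15b3
e810+=(${pci_bus_cache["$intel:0x1592"]})    # one E810 variant
e810+=(${pci_bus_cache["$intel:0x159b"]})    # the variant matched in this run (0x159b)
x722+=(${pci_bus_cache["$intel:0x37d2"]})
pci_devs=("${e810[@]}")                      # selected because SPDK_TEST_NVMF_NICS=e810
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs behind this function
    echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
done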
18:08:58 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:11.579 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:11.579 Found net devices under 0000:31:00.0: cvl_0_0 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:11.579 Found net devices under 0000:31:00.1: cvl_0_1 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:11.579 18:08:58 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:11.579 18:08:59 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:11.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:11.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:32:11.579 00:32:11.579 --- 10.0.0.2 ping statistics --- 00:32:11.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:11.579 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:32:11.579 18:08:59 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:11.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:11.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:32:11.579 00:32:11.579 --- 10.0.0.1 ping statistics --- 00:32:11.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:11.579 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:32:11.579 18:08:59 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:11.579 18:08:59 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:32:11.579 18:08:59 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:11.579 18:08:59 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:13.485 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:13.485 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:13.485 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:13.485 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:13.485 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:13.485 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:13.485 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:13.485 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:13.485 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:13.485 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:32:13.485 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:13.485 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:13.485 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:13.485 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:13.485 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:13.485 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:13.485 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:13.745 18:09:01 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:13.745 18:09:01 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:13.745 18:09:01 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:13.745 18:09:01 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:13.745 18:09:01 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:13.745 18:09:01 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:14.004 18:09:01 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:14.004 18:09:01 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:32:14.004 18:09:01 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:14.004 18:09:01 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:14.004 18:09:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:14.004 18:09:01 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3333786 00:32:14.004 18:09:01 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3333786 00:32:14.004 18:09:01 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3333786 ']' 00:32:14.004 18:09:01 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:14.004 18:09:01 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:14.004 18:09:01 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:14.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
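Note: a hypothetical poll loop (not the autotest helper itself) showing what waitforlisten blocks on here: the freshly started nvmf_tgt answering JSON-RPC on /var/tmp/spdk.sock. rpc_get_methods is a standard SPDK RPC; the retry count mirrors the max_retries=100 visible in the trace.

for ((i = 0; i < 100; i++)); do
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done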
00:32:14.004 18:09:01 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:14.004 18:09:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:14.004 18:09:01 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:14.004 [2024-12-06 18:09:01.636646] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:32:14.004 [2024-12-06 18:09:01.636694] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:14.004 [2024-12-06 18:09:01.721718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.004 [2024-12-06 18:09:01.757683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:14.004 [2024-12-06 18:09:01.757719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:14.004 [2024-12-06 18:09:01.757729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:14.004 [2024-12-06 18:09:01.757736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:14.004 [2024-12-06 18:09:01.757742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:14.004 [2024-12-06 18:09:01.758355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.963 18:09:02 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:14.963 18:09:02 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:32:14.963 18:09:02 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:14.963 18:09:02 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:14.963 18:09:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:14.963 18:09:02 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:14.963 18:09:02 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:32:14.963 18:09:02 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:14.963 18:09:02 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.963 18:09:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:14.963 [2024-12-06 18:09:02.440291] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:14.963 18:09:02 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.963 18:09:02 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:14.963 18:09:02 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:14.963 18:09:02 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:14.963 18:09:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:14.963 ************************************ 00:32:14.963 START TEST fio_dif_1_default 00:32:14.963 ************************************ 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:32:14.963 
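Note: rpc_cmd is the autotest wrapper around scripts/rpc.py, so (under that assumption) the transport setup just logged is equivalent to the direct call below; --dif-insert-or-strip is what makes the TCP transport insert and strip protection information, which is the behavior the dif tests exercise.

scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip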
18:09:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:14.963 bdev_null0 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:14.963 [2024-12-06 18:09:02.500600] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1347 -- # local asan_lib= 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:14.963 { 00:32:14.963 "params": { 00:32:14.963 "name": "Nvme$subsystem", 00:32:14.963 "trtype": "$TEST_TRANSPORT", 00:32:14.963 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:14.963 "adrfam": "ipv4", 00:32:14.963 "trsvcid": "$NVMF_PORT", 00:32:14.963 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:14.963 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:14.963 "hdgst": ${hdgst:-false}, 00:32:14.963 "ddgst": ${ddgst:-false} 00:32:14.963 }, 00:32:14.963 "method": "bdev_nvme_attach_controller" 00:32:14.963 } 00:32:14.963 EOF 00:32:14.963 )") 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
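Note: the heredoc above is the template gen_nvmf_target_json fills in per subsystem; fio_bdev then runs stock fio with SPDK's bdev engine preloaded, feeding that JSON config on fd 62 and the generated jobfile on fd 61. The invocation reduces to this sketch, with the workspace prefix from the logged paths shortened:

LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61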
00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:14.963 "params": { 00:32:14.963 "name": "Nvme0", 00:32:14.963 "trtype": "tcp", 00:32:14.963 "traddr": "10.0.0.2", 00:32:14.963 "adrfam": "ipv4", 00:32:14.963 "trsvcid": "4420", 00:32:14.963 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:14.963 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:14.963 "hdgst": false, 00:32:14.963 "ddgst": false 00:32:14.963 }, 00:32:14.963 "method": "bdev_nvme_attach_controller" 00:32:14.963 }' 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:14.963 18:09:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:15.244 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:15.244 fio-3.35 00:32:15.244 Starting 1 thread 00:32:27.481 00:32:27.481 filename0: (groupid=0, jobs=1): err= 0: pid=3334465: Fri Dec 6 18:09:13 2024 00:32:27.481 read: IOPS=607, BW=2431KiB/s (2489kB/s)(23.8MiB/10037msec) 00:32:27.481 slat (nsec): min=4741, max=35416, avg=7179.85, stdev=1445.32 00:32:27.481 clat (usec): min=346, max=42812, avg=6562.14, stdev=14235.66 00:32:27.481 lat (usec): min=351, max=42820, avg=6569.32, stdev=14235.06 00:32:27.481 clat percentiles (usec): 00:32:27.481 | 1.00th=[ 486], 5.00th=[ 611], 10.00th=[ 644], 20.00th=[ 668], 00:32:27.481 | 30.00th=[ 701], 40.00th=[ 750], 50.00th=[ 775], 60.00th=[ 783], 00:32:27.481 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[41157], 95.00th=[41157], 00:32:27.481 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:32:27.481 | 99.99th=[42730] 00:32:27.481 bw ( KiB/s): min= 704, max=17632, per=100.00%, avg=2438.40, stdev=4764.26, samples=20 00:32:27.481 iops : min= 176, max= 4408, avg=609.60, stdev=1191.06, samples=20 00:32:27.481 lat (usec) : 500=1.25%, 750=38.57%, 1000=45.56% 00:32:27.481 lat (msec) : 2=0.20%, 4=0.07%, 50=14.36% 00:32:27.481 cpu : usr=93.27%, sys=6.49%, ctx=14, majf=0, minf=192 00:32:27.481 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.481 issued rwts: total=6100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.481 latency : target=0, window=0, 
percentile=100.00%, depth=4 00:32:27.481 00:32:27.481 Run status group 0 (all jobs): 00:32:27.481 READ: bw=2431KiB/s (2489kB/s), 2431KiB/s-2431KiB/s (2489kB/s-2489kB/s), io=23.8MiB (25.0MB), run=10037-10037msec 00:32:27.481 18:09:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:27.481 18:09:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:32:27.481 18:09:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:32:27.481 18:09:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:27.481 18:09:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:32:27.481 18:09:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:27.481 18:09:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.482 00:32:27.482 real 0m11.257s 00:32:27.482 user 0m20.614s 00:32:27.482 sys 0m1.004s 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:27.482 ************************************ 00:32:27.482 END TEST fio_dif_1_default 00:32:27.482 ************************************ 00:32:27.482 18:09:13 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:27.482 18:09:13 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:27.482 18:09:13 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:27.482 18:09:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:27.482 ************************************ 00:32:27.482 START TEST fio_dif_1_multi_subsystems 00:32:27.482 ************************************ 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:27.482 bdev_null0 
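The create_subsystem helper traced above and below reduces to four RPCs against the running target: create a null bdev with 16 bytes of per-block metadata and the requested DIF type, create an NVMe-oF subsystem, attach the bdev as its namespace, and expose a TCP listener. A standalone sketch using SPDK's rpc.py, run from the SPDK repository root and assuming a target is already up with the TCP transport created (e.g. via ./scripts/rpc.py nvmf_create_transport -t tcp earlier in the run) and the default RPC socket:

    # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # subsystem, namespace, and TCP listener, exactly as traced in this log
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Teardown mirrors this in reverse, as the destroy_subsystems trace after each run shows: nvmf_delete_subsystem followed by bdev_null_delete.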
00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:27.482 [2024-12-06 18:09:13.801810] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:27.482 bdev_null1 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:27.482 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:27.483 { 00:32:27.483 "params": { 00:32:27.483 "name": "Nvme$subsystem", 00:32:27.483 "trtype": "$TEST_TRANSPORT", 00:32:27.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:27.483 "adrfam": "ipv4", 00:32:27.483 "trsvcid": "$NVMF_PORT", 00:32:27.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:27.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:27.483 "hdgst": ${hdgst:-false}, 00:32:27.483 "ddgst": ${ddgst:-false} 00:32:27.483 }, 00:32:27.483 "method": "bdev_nvme_attach_controller" 00:32:27.483 } 00:32:27.483 EOF 00:32:27.483 )") 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:32:27.483 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:32:27.484 18:09:13 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:27.484 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:27.484 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:32:27.484 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:27.484 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:32:27.484 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:27.484 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:27.484 { 00:32:27.484 "params": { 00:32:27.484 "name": "Nvme$subsystem", 00:32:27.484 "trtype": "$TEST_TRANSPORT", 00:32:27.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:27.484 "adrfam": "ipv4", 00:32:27.484 "trsvcid": "$NVMF_PORT", 00:32:27.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:27.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:27.484 "hdgst": ${hdgst:-false}, 00:32:27.484 "ddgst": ${ddgst:-false} 00:32:27.484 }, 00:32:27.484 "method": "bdev_nvme_attach_controller" 00:32:27.484 } 00:32:27.484 EOF 00:32:27.484 )") 00:32:27.484 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:32:27.484 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:27.484 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:32:27.484 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:32:27.485 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:32:27.485 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:27.485 "params": { 00:32:27.485 "name": "Nvme0", 00:32:27.485 "trtype": "tcp", 00:32:27.485 "traddr": "10.0.0.2", 00:32:27.485 "adrfam": "ipv4", 00:32:27.485 "trsvcid": "4420", 00:32:27.485 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:27.485 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:27.485 "hdgst": false, 00:32:27.485 "ddgst": false 00:32:27.485 }, 00:32:27.485 "method": "bdev_nvme_attach_controller" 00:32:27.485 },{ 00:32:27.485 "params": { 00:32:27.485 "name": "Nvme1", 00:32:27.485 "trtype": "tcp", 00:32:27.485 "traddr": "10.0.0.2", 00:32:27.485 "adrfam": "ipv4", 00:32:27.485 "trsvcid": "4420", 00:32:27.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:27.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:27.485 "hdgst": false, 00:32:27.485 "ddgst": false 00:32:27.485 }, 00:32:27.485 "method": "bdev_nvme_attach_controller" 00:32:27.485 }' 00:32:27.485 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:27.485 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:27.485 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:27.485 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:27.486 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:27.486 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:27.486 18:09:13 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:27.486 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:27.486 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:27.486 18:09:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:27.486 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:27.486 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:27.486 fio-3.35 00:32:27.486 Starting 2 threads 00:32:37.477 00:32:37.477 filename0: (groupid=0, jobs=1): err= 0: pid=3337550: Fri Dec 6 18:09:24 2024 00:32:37.477 read: IOPS=97, BW=391KiB/s (400kB/s)(3920KiB/10024msec) 00:32:37.477 slat (nsec): min=4322, max=18603, avg=5887.44, stdev=784.66 00:32:37.477 clat (usec): min=850, max=42955, avg=40894.71, stdev=2579.45 00:32:37.477 lat (usec): min=856, max=42961, avg=40900.60, stdev=2579.47 00:32:37.477 clat percentiles (usec): 00:32:37.477 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:37.477 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:37.477 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:32:37.477 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:32:37.477 | 99.99th=[42730] 00:32:37.477 bw ( KiB/s): min= 384, max= 416, per=34.04%, avg=390.40, stdev=13.13, samples=20 00:32:37.477 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:32:37.477 lat (usec) : 1000=0.41% 00:32:37.477 lat (msec) : 50=99.59% 00:32:37.477 cpu : usr=95.96%, sys=3.85%, ctx=13, majf=0, minf=44 00:32:37.477 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:37.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.477 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.477 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:37.477 filename1: (groupid=0, jobs=1): err= 0: pid=3337551: Fri Dec 6 18:09:24 2024 00:32:37.477 read: IOPS=188, BW=755KiB/s (773kB/s)(7584KiB/10042msec) 00:32:37.477 slat (nsec): min=4263, max=17968, avg=5978.55, stdev=913.72 00:32:37.477 clat (usec): min=563, max=43109, avg=21167.67, stdev=20153.13 00:32:37.477 lat (usec): min=569, max=43122, avg=21173.65, stdev=20153.09 00:32:37.477 clat percentiles (usec): 00:32:37.477 | 1.00th=[ 635], 5.00th=[ 807], 10.00th=[ 824], 20.00th=[ 840], 00:32:37.477 | 30.00th=[ 857], 40.00th=[ 873], 50.00th=[41157], 60.00th=[41157], 00:32:37.477 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:37.477 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:32:37.477 | 99.99th=[43254] 00:32:37.477 bw ( KiB/s): min= 672, max= 768, per=65.99%, avg=756.80, stdev=28.00, samples=20 00:32:37.477 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:32:37.477 lat (usec) : 750=1.95%, 1000=45.36% 00:32:37.477 lat (msec) : 2=2.27%, 50=50.42% 00:32:37.477 cpu : usr=95.23%, sys=4.58%, ctx=9, majf=0, minf=163 00:32:37.477 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:32:37.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:37.477 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:37.477 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:37.477 00:32:37.477 Run status group 0 (all jobs): 00:32:37.477 READ: bw=1146KiB/s (1173kB/s), 391KiB/s-755KiB/s (400kB/s-773kB/s), io=11.2MiB (11.8MB), run=10024-10042msec 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.477 00:32:37.477 real 0m11.292s 00:32:37.477 user 0m33.585s 00:32:37.477 sys 0m1.149s 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:37.477 18:09:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:37.477 ************************************ 00:32:37.477 END TEST fio_dif_1_multi_subsystems 00:32:37.477 
************************************ 00:32:37.477 18:09:25 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:37.477 18:09:25 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:37.477 18:09:25 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:37.477 18:09:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:37.477 ************************************ 00:32:37.477 START TEST fio_dif_rand_params 00:32:37.477 ************************************ 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.477 bdev_null0 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:37.477 [2024-12-06 18:09:25.141350] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:37.477 18:09:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:37.478 { 00:32:37.478 "params": { 00:32:37.478 "name": "Nvme$subsystem", 00:32:37.478 "trtype": "$TEST_TRANSPORT", 00:32:37.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:37.478 "adrfam": "ipv4", 00:32:37.478 "trsvcid": "$NVMF_PORT", 00:32:37.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:37.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:37.478 "hdgst": ${hdgst:-false}, 00:32:37.478 "ddgst": ${ddgst:-false} 00:32:37.478 }, 00:32:37.478 "method": "bdev_nvme_attach_controller" 00:32:37.478 } 00:32:37.478 EOF 00:32:37.478 )") 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:37.478 "params": { 00:32:37.478 "name": "Nvme0", 00:32:37.478 "trtype": "tcp", 00:32:37.478 "traddr": "10.0.0.2", 00:32:37.478 "adrfam": "ipv4", 00:32:37.478 "trsvcid": "4420", 00:32:37.478 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:37.478 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:37.478 "hdgst": false, 00:32:37.478 "ddgst": false 00:32:37.478 }, 00:32:37.478 "method": "bdev_nvme_attach_controller" 00:32:37.478 }' 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:37.478 18:09:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:37.736 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:37.736 ... 
00:32:37.736 fio-3.35 00:32:37.736 Starting 3 threads 00:32:44.305 00:32:44.305 filename0: (groupid=0, jobs=1): err= 0: pid=3340064: Fri Dec 6 18:09:31 2024 00:32:44.305 read: IOPS=305, BW=38.2MiB/s (40.1MB/s)(191MiB/5007msec) 00:32:44.305 slat (nsec): min=4182, max=20694, avg=6338.59, stdev=946.29 00:32:44.305 clat (usec): min=5407, max=90550, avg=9807.96, stdev=5340.84 00:32:44.305 lat (usec): min=5414, max=90556, avg=9814.30, stdev=5340.92 00:32:44.305 clat percentiles (usec): 00:32:44.305 | 1.00th=[ 5932], 5.00th=[ 6915], 10.00th=[ 7439], 20.00th=[ 8094], 00:32:44.305 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:32:44.305 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10814], 95.00th=[11207], 00:32:44.305 | 99.00th=[48497], 99.50th=[50070], 99.90th=[52691], 99.95th=[90702], 00:32:44.305 | 99.99th=[90702] 00:32:44.305 bw ( KiB/s): min=30720, max=43008, per=33.45%, avg=39091.20, stdev=3537.06, samples=10 00:32:44.305 iops : min= 240, max= 336, avg=305.40, stdev=27.63, samples=10 00:32:44.305 lat (msec) : 10=72.22%, 20=26.27%, 50=0.98%, 100=0.52% 00:32:44.305 cpu : usr=96.66%, sys=3.12%, ctx=6, majf=0, minf=136 00:32:44.305 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:44.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.305 issued rwts: total=1530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.305 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:44.305 filename0: (groupid=0, jobs=1): err= 0: pid=3340065: Fri Dec 6 18:09:31 2024 00:32:44.305 read: IOPS=318, BW=39.8MiB/s (41.7MB/s)(201MiB/5044msec) 00:32:44.305 slat (nsec): min=2981, max=20699, avg=6384.43, stdev=880.23 00:32:44.305 clat (usec): min=4118, max=50637, avg=9393.29, stdev=3132.28 00:32:44.305 lat (usec): min=4124, max=50643, avg=9399.68, stdev=3132.29 00:32:44.305 clat percentiles (usec): 00:32:44.305 | 1.00th=[ 5604], 5.00th=[ 6783], 10.00th=[ 7439], 20.00th=[ 8029], 00:32:44.305 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:32:44.305 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10814], 95.00th=[11207], 00:32:44.305 | 99.00th=[12125], 99.50th=[13173], 99.90th=[50594], 99.95th=[50594], 00:32:44.305 | 99.99th=[50594] 00:32:44.305 bw ( KiB/s): min=37376, max=45056, per=35.12%, avg=41036.80, stdev=2545.88, samples=10 00:32:44.305 iops : min= 292, max= 352, avg=320.60, stdev=19.89, samples=10 00:32:44.305 lat (msec) : 10=70.16%, 20=29.35%, 50=0.25%, 100=0.25% 00:32:44.305 cpu : usr=95.60%, sys=4.18%, ctx=7, majf=0, minf=126 00:32:44.305 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:44.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.305 issued rwts: total=1605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.305 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:44.305 filename0: (groupid=0, jobs=1): err= 0: pid=3340066: Fri Dec 6 18:09:31 2024 00:32:44.305 read: IOPS=293, BW=36.7MiB/s (38.5MB/s)(184MiB/5003msec) 00:32:44.305 slat (nsec): min=4217, max=21053, avg=6322.61, stdev=974.80 00:32:44.305 clat (usec): min=4600, max=89411, avg=10201.05, stdev=7270.03 00:32:44.305 lat (usec): min=4606, max=89418, avg=10207.37, stdev=7270.04 00:32:44.305 clat percentiles (usec): 00:32:44.305 | 1.00th=[ 5407], 5.00th=[ 6194], 10.00th=[ 6783], 20.00th=[ 7504], 
00:32:44.305 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9765], 00:32:44.305 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11076], 95.00th=[11600], 00:32:44.305 | 99.00th=[48497], 99.50th=[49021], 99.90th=[88605], 99.95th=[89654], 00:32:44.305 | 99.99th=[89654] 00:32:44.305 bw ( KiB/s): min=31744, max=41216, per=33.10%, avg=38684.44, stdev=2967.40, samples=9 00:32:44.305 iops : min= 248, max= 322, avg=302.22, stdev=23.18, samples=9 00:32:44.305 lat (msec) : 10=65.99%, 20=31.09%, 50=2.79%, 100=0.14% 00:32:44.305 cpu : usr=96.52%, sys=3.24%, ctx=9, majf=0, minf=152 00:32:44.305 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:44.305 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.305 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.305 issued rwts: total=1470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.305 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:44.305 00:32:44.305 Run status group 0 (all jobs): 00:32:44.305 READ: bw=114MiB/s (120MB/s), 36.7MiB/s-39.8MiB/s (38.5MB/s-41.7MB/s), io=576MiB (604MB), run=5003-5044msec 00:32:44.305 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:44.305 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:44.305 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:44.305 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:44.305 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:44.305 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:44.305 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.305 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.305 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.305 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:44.305 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.305 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.306 bdev_null0 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.306 [2024-12-06 18:09:31.265078] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.306 bdev_null1 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.306 bdev_null2 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:44.306 { 00:32:44.306 "params": { 00:32:44.306 "name": "Nvme$subsystem", 00:32:44.306 "trtype": "$TEST_TRANSPORT", 00:32:44.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:44.306 "adrfam": "ipv4", 00:32:44.306 "trsvcid": "$NVMF_PORT", 00:32:44.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:44.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:44.306 "hdgst": ${hdgst:-false}, 00:32:44.306 "ddgst": ${ddgst:-false} 00:32:44.306 }, 00:32:44.306 "method": "bdev_nvme_attach_controller" 00:32:44.306 } 00:32:44.306 EOF 00:32:44.306 )") 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:44.306 { 00:32:44.306 "params": { 00:32:44.306 "name": "Nvme$subsystem", 00:32:44.306 "trtype": "$TEST_TRANSPORT", 00:32:44.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:44.306 "adrfam": "ipv4", 00:32:44.306 "trsvcid": "$NVMF_PORT", 00:32:44.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:44.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:44.306 "hdgst": ${hdgst:-false}, 00:32:44.306 "ddgst": ${ddgst:-false} 00:32:44.306 }, 00:32:44.306 "method": "bdev_nvme_attach_controller" 00:32:44.306 } 00:32:44.306 EOF 00:32:44.306 )") 00:32:44.306 18:09:31 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:44.306 18:09:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:44.307 { 00:32:44.307 "params": { 00:32:44.307 "name": "Nvme$subsystem", 00:32:44.307 "trtype": "$TEST_TRANSPORT", 00:32:44.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:44.307 "adrfam": "ipv4", 00:32:44.307 "trsvcid": "$NVMF_PORT", 00:32:44.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:44.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:44.307 "hdgst": ${hdgst:-false}, 00:32:44.307 "ddgst": ${ddgst:-false} 00:32:44.307 }, 00:32:44.307 "method": "bdev_nvme_attach_controller" 00:32:44.307 } 00:32:44.307 EOF 00:32:44.307 )") 00:32:44.307 18:09:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:44.307 18:09:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:32:44.307 18:09:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:44.307 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:44.307 18:09:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:44.307 "params": { 00:32:44.307 "name": "Nvme0", 00:32:44.307 "trtype": "tcp", 00:32:44.307 "traddr": "10.0.0.2", 00:32:44.307 "adrfam": "ipv4", 00:32:44.307 "trsvcid": "4420", 00:32:44.307 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:44.307 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:44.307 "hdgst": false, 00:32:44.307 "ddgst": false 00:32:44.307 }, 00:32:44.307 "method": "bdev_nvme_attach_controller" 00:32:44.307 },{ 00:32:44.307 "params": { 00:32:44.307 "name": "Nvme1", 00:32:44.307 "trtype": "tcp", 00:32:44.307 "traddr": "10.0.0.2", 00:32:44.307 "adrfam": "ipv4", 00:32:44.307 "trsvcid": "4420", 00:32:44.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:44.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:44.307 "hdgst": false, 00:32:44.307 "ddgst": false 00:32:44.307 }, 00:32:44.307 "method": "bdev_nvme_attach_controller" 00:32:44.307 },{ 00:32:44.307 "params": { 00:32:44.307 "name": "Nvme2", 00:32:44.307 "trtype": "tcp", 00:32:44.307 "traddr": "10.0.0.2", 00:32:44.307 "adrfam": "ipv4", 00:32:44.307 "trsvcid": "4420", 00:32:44.307 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:44.307 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:44.307 "hdgst": false, 00:32:44.307 "ddgst": false 00:32:44.307 }, 00:32:44.307 "method": "bdev_nvme_attach_controller" 00:32:44.307 }' 00:32:44.307 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:44.307 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:44.307 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:44.307 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:44.307 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:44.307 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:44.307 
18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:44.307 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:44.307 18:09:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:44.307 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:44.307 ... 00:32:44.307 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:44.307 ... 00:32:44.307 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:44.307 ... 00:32:44.307 fio-3.35 00:32:44.307 Starting 24 threads 00:32:56.555 00:32:56.555 filename0: (groupid=0, jobs=1): err= 0: pid=3341565: Fri Dec 6 18:09:42 2024 00:32:56.555 read: IOPS=654, BW=2618KiB/s (2681kB/s)(25.6MiB/10022msec) 00:32:56.555 slat (nsec): min=2889, max=56541, avg=15359.24, stdev=9178.42 00:32:56.555 clat (usec): min=8370, max=41005, avg=24311.32, stdev=1981.01 00:32:56.555 lat (usec): min=8403, max=41024, avg=24326.68, stdev=1981.64 00:32:56.555 clat percentiles (usec): 00:32:56.555 | 1.00th=[15533], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:32:56.555 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511], 00:32:56.555 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[25822], 00:32:56.555 | 99.00th=[27132], 99.50th=[28967], 99.90th=[40109], 99.95th=[40109], 00:32:56.555 | 99.99th=[41157] 00:32:56.555 bw ( KiB/s): min= 2560, max= 2944, per=4.10%, avg=2617.30, stdev=96.95, samples=20 00:32:56.555 iops : min= 640, max= 736, avg=654.30, stdev=24.22, samples=20 00:32:56.555 lat (msec) : 10=0.21%, 20=2.88%, 50=96.91% 00:32:56.555 cpu : usr=97.58%, sys=1.52%, ctx=865, majf=0, minf=9 00:32:56.555 IO depths : 1=6.0%, 2=12.0%, 4=24.4%, 8=51.1%, 16=6.5%, 32=0.0%, >=64=0.0% 00:32:56.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.555 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.555 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.555 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.555 filename0: (groupid=0, jobs=1): err= 0: pid=3341566: Fri Dec 6 18:09:42 2024 00:32:56.555 read: IOPS=647, BW=2591KiB/s (2653kB/s)(25.3MiB/10005msec) 00:32:56.555 slat (nsec): min=4136, max=56537, avg=16442.75, stdev=9656.32 00:32:56.555 clat (usec): min=11667, max=38253, avg=24546.38, stdev=1148.97 00:32:56.555 lat (usec): min=11673, max=38266, avg=24562.83, stdev=1148.69 00:32:56.556 clat percentiles (usec): 00:32:56.556 | 1.00th=[23200], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987], 00:32:56.556 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511], 00:32:56.556 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[25822], 00:32:56.556 | 99.00th=[26346], 99.50th=[27132], 99.90th=[38011], 99.95th=[38011], 00:32:56.556 | 99.99th=[38011] 00:32:56.556 bw ( KiB/s): min= 2432, max= 2688, per=4.05%, avg=2586.58, stdev=67.46, samples=19 00:32:56.556 iops : min= 608, max= 672, avg=646.58, stdev=16.79, samples=19 00:32:56.556 lat (msec) : 20=0.25%, 50=99.75% 00:32:56.556 cpu : usr=99.06%, sys=0.68%, ctx=13, majf=0, minf=9 00:32:56.556 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 
16=6.2%, 32=0.0%, >=64=0.0% 00:32:56.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.556 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.556 issued rwts: total=6480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.556 filename0: (groupid=0, jobs=1): err= 0: pid=3341567: Fri Dec 6 18:09:42 2024 00:32:56.556 read: IOPS=647, BW=2590KiB/s (2652kB/s)(25.3MiB/10005msec) 00:32:56.556 slat (nsec): min=3125, max=55684, avg=14501.33, stdev=8941.89 00:32:56.556 clat (usec): min=7256, max=45958, avg=24576.21, stdev=1502.79 00:32:56.556 lat (usec): min=7262, max=45967, avg=24590.71, stdev=1502.57 00:32:56.556 clat percentiles (usec): 00:32:56.556 | 1.00th=[22938], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987], 00:32:56.556 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:32:56.556 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[25822], 00:32:56.556 | 99.00th=[26608], 99.50th=[26870], 99.90th=[45876], 99.95th=[45876], 00:32:56.556 | 99.99th=[45876] 00:32:56.556 bw ( KiB/s): min= 2432, max= 2688, per=4.05%, avg=2586.58, stdev=80.88, samples=19 00:32:56.556 iops : min= 608, max= 672, avg=646.58, stdev=20.25, samples=19 00:32:56.556 lat (msec) : 10=0.22%, 20=0.22%, 50=99.57% 00:32:56.556 cpu : usr=99.06%, sys=0.68%, ctx=13, majf=0, minf=9 00:32:56.556 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:56.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.556 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.556 issued rwts: total=6478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.556 filename0: (groupid=0, jobs=1): err= 0: pid=3341568: Fri Dec 6 18:09:42 2024 00:32:56.556 read: IOPS=647, BW=2590KiB/s (2652kB/s)(25.3MiB/10009msec) 00:32:56.556 slat (nsec): min=4178, max=68162, avg=17918.71, stdev=11722.54 00:32:56.556 clat (usec): min=15195, max=33366, avg=24555.17, stdev=889.14 00:32:56.556 lat (usec): min=15203, max=33379, avg=24573.09, stdev=887.78 00:32:56.556 clat percentiles (usec): 00:32:56.556 | 1.00th=[23200], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:32:56.556 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:32:56.556 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25822], 00:32:56.556 | 99.00th=[26608], 99.50th=[26870], 99.90th=[33424], 99.95th=[33424], 00:32:56.556 | 99.99th=[33424] 00:32:56.556 bw ( KiB/s): min= 2432, max= 2693, per=4.05%, avg=2587.47, stdev=80.99, samples=19 00:32:56.556 iops : min= 608, max= 673, avg=646.84, stdev=20.23, samples=19 00:32:56.556 lat (msec) : 20=0.25%, 50=99.75% 00:32:56.556 cpu : usr=99.00%, sys=0.73%, ctx=13, majf=0, minf=9 00:32:56.556 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:56.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.556 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.556 issued rwts: total=6480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.556 filename0: (groupid=0, jobs=1): err= 0: pid=3341569: Fri Dec 6 18:09:42 2024 00:32:56.556 read: IOPS=661, BW=2647KiB/s (2711kB/s)(25.9MiB/10009msec) 00:32:56.556 slat (nsec): min=5748, max=57625, avg=9132.58, stdev=6262.24 00:32:56.556 clat 
(usec): min=12054, max=26900, avg=24097.56, stdev=2160.63 00:32:56.556 lat (usec): min=12060, max=26907, avg=24106.70, stdev=2160.67 00:32:56.556 clat percentiles (usec): 00:32:56.556 | 1.00th=[15270], 5.00th=[17433], 10.00th=[23725], 20.00th=[23987], 00:32:56.556 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:32:56.556 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[25822], 00:32:56.556 | 99.00th=[26346], 99.50th=[26608], 99.90th=[26870], 99.95th=[26870], 00:32:56.556 | 99.99th=[26870] 00:32:56.556 bw ( KiB/s): min= 2554, max= 2816, per=4.14%, avg=2646.95, stdev=74.78, samples=19 00:32:56.556 iops : min= 638, max= 704, avg=661.68, stdev=18.72, samples=19 00:32:56.556 lat (msec) : 20=6.31%, 50=93.69% 00:32:56.556 cpu : usr=98.20%, sys=1.24%, ctx=144, majf=0, minf=9 00:32:56.556 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:56.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.556 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.556 issued rwts: total=6624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.556 filename0: (groupid=0, jobs=1): err= 0: pid=3341570: Fri Dec 6 18:09:42 2024 00:32:56.556 read: IOPS=647, BW=2591KiB/s (2653kB/s)(25.3MiB/10005msec) 00:32:56.556 slat (nsec): min=2871, max=62924, avg=10381.30, stdev=7743.63 00:32:56.556 clat (usec): min=19771, max=27741, avg=24619.20, stdev=697.78 00:32:56.556 lat (usec): min=19785, max=27748, avg=24629.58, stdev=697.01 00:32:56.556 clat percentiles (usec): 00:32:56.556 | 1.00th=[23200], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987], 00:32:56.556 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24511], 60.00th=[24773], 00:32:56.556 | 70.00th=[25035], 80.00th=[25035], 90.00th=[25297], 95.00th=[25822], 00:32:56.556 | 99.00th=[26608], 99.50th=[26870], 99.90th=[27657], 99.95th=[27657], 00:32:56.556 | 99.99th=[27657] 00:32:56.556 bw ( KiB/s): min= 2432, max= 2688, per=4.06%, avg=2592.37, stdev=71.46, samples=19 00:32:56.556 iops : min= 608, max= 672, avg=647.95, stdev=17.85, samples=19 00:32:56.556 lat (msec) : 20=0.20%, 50=99.80% 00:32:56.556 cpu : usr=98.57%, sys=1.09%, ctx=97, majf=0, minf=9 00:32:56.556 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:56.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.556 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.556 issued rwts: total=6480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.556 filename0: (groupid=0, jobs=1): err= 0: pid=3341571: Fri Dec 6 18:09:42 2024 00:32:56.556 read: IOPS=682, BW=2729KiB/s (2795kB/s)(26.7MiB/10002msec) 00:32:56.556 slat (nsec): min=5730, max=57317, avg=12973.59, stdev=9205.81 00:32:56.556 clat (usec): min=11164, max=42242, avg=23361.28, stdev=4256.94 00:32:56.556 lat (usec): min=11172, max=42266, avg=23374.25, stdev=4258.45 00:32:56.556 clat percentiles (usec): 00:32:56.556 | 1.00th=[14877], 5.00th=[16188], 10.00th=[17171], 20.00th=[19792], 00:32:56.556 | 30.00th=[23200], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:32:56.556 | 70.00th=[24773], 80.00th=[25035], 90.00th=[26084], 95.00th=[30540], 00:32:56.556 | 99.00th=[38011], 99.50th=[38536], 99.90th=[41681], 99.95th=[42206], 00:32:56.556 | 99.99th=[42206] 00:32:56.556 bw ( KiB/s): min= 2554, max= 3104, per=4.28%, avg=2734.53, 
stdev=157.20, samples=19 00:32:56.556 iops : min= 638, max= 776, avg=683.58, stdev=39.33, samples=19 00:32:56.556 lat (msec) : 20=21.44%, 50=78.56% 00:32:56.556 cpu : usr=98.91%, sys=0.81%, ctx=11, majf=0, minf=9 00:32:56.556 IO depths : 1=1.3%, 2=5.7%, 4=13.9%, 8=67.1%, 16=12.0%, 32=0.0%, >=64=0.0% 00:32:56.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.556 complete : 0=0.0%, 4=91.2%, 8=4.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.556 issued rwts: total=6824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.556 filename0: (groupid=0, jobs=1): err= 0: pid=3341572: Fri Dec 6 18:09:42 2024 00:32:56.556 read: IOPS=654, BW=2618KiB/s (2680kB/s)(25.6MiB/10009msec) 00:32:56.556 slat (nsec): min=4313, max=66157, avg=17327.77, stdev=11589.78 00:32:56.556 clat (usec): min=13389, max=38084, avg=24295.61, stdev=2378.94 00:32:56.556 lat (usec): min=13400, max=38115, avg=24312.94, stdev=2379.63 00:32:56.556 clat percentiles (usec): 00:32:56.556 | 1.00th=[16057], 5.00th=[19792], 10.00th=[22938], 20.00th=[23725], 00:32:56.556 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511], 00:32:56.556 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[26870], 00:32:56.556 | 99.00th=[32900], 99.50th=[33817], 99.90th=[38011], 99.95th=[38011], 00:32:56.556 | 99.99th=[38011] 00:32:56.556 bw ( KiB/s): min= 2432, max= 3024, per=4.10%, avg=2616.95, stdev=125.83, samples=19 00:32:56.556 iops : min= 608, max= 756, avg=654.21, stdev=31.47, samples=19 00:32:56.556 lat (msec) : 20=6.14%, 50=93.86% 00:32:56.556 cpu : usr=98.16%, sys=1.21%, ctx=210, majf=0, minf=9 00:32:56.556 IO depths : 1=4.2%, 2=8.4%, 4=18.1%, 8=59.9%, 16=9.3%, 32=0.0%, >=64=0.0% 00:32:56.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.556 complete : 0=0.0%, 4=92.5%, 8=2.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.556 issued rwts: total=6550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.556 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.556 filename1: (groupid=0, jobs=1): err= 0: pid=3341573: Fri Dec 6 18:09:42 2024 00:32:56.556 read: IOPS=713, BW=2853KiB/s (2921kB/s)(27.9MiB/10020msec) 00:32:56.556 slat (nsec): min=5729, max=64707, avg=10278.46, stdev=7890.21 00:32:56.556 clat (usec): min=11614, max=42588, avg=22360.04, stdev=5090.54 00:32:56.556 lat (usec): min=11620, max=42602, avg=22370.32, stdev=5092.36 00:32:56.556 clat percentiles (usec): 00:32:56.556 | 1.00th=[14484], 5.00th=[15533], 10.00th=[16188], 20.00th=[17171], 00:32:56.556 | 30.00th=[19268], 40.00th=[20579], 50.00th=[23462], 60.00th=[24249], 00:32:56.556 | 70.00th=[24511], 80.00th=[25035], 90.00th=[28181], 95.00th=[31327], 00:32:56.556 | 99.00th=[38011], 99.50th=[39060], 99.90th=[40633], 99.95th=[42730], 00:32:56.556 | 99.99th=[42730] 00:32:56.556 bw ( KiB/s): min= 2688, max= 3057, per=4.47%, avg=2853.75, stdev=118.16, samples=20 00:32:56.556 iops : min= 672, max= 764, avg=713.40, stdev=29.51, samples=20 00:32:56.556 lat (msec) : 20=34.59%, 50=65.41% 00:32:56.556 cpu : usr=98.63%, sys=0.96%, ctx=96, majf=0, minf=9 00:32:56.556 IO depths : 1=1.2%, 2=2.4%, 4=8.5%, 8=75.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:32:56.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.557 complete : 0=0.0%, 4=89.7%, 8=5.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.557 issued rwts: total=7146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.557 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:32:56.557 filename1: (groupid=0, jobs=1): err= 0: pid=3341574: Fri Dec 6 18:09:42 2024 00:32:56.557 read: IOPS=647, BW=2590KiB/s (2652kB/s)(25.3MiB/10008msec) 00:32:56.557 slat (nsec): min=2886, max=62373, avg=16179.76, stdev=9984.89 00:32:56.557 clat (usec): min=15213, max=35718, avg=24572.35, stdev=962.17 00:32:56.557 lat (usec): min=15219, max=35727, avg=24588.53, stdev=961.73 00:32:56.557 clat percentiles (usec): 00:32:56.557 | 1.00th=[22938], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987], 00:32:56.557 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24511], 60.00th=[24773], 00:32:56.557 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25560], 00:32:56.557 | 99.00th=[26608], 99.50th=[26870], 99.90th=[35914], 99.95th=[35914], 00:32:56.557 | 99.99th=[35914] 00:32:56.557 bw ( KiB/s): min= 2432, max= 2693, per=4.05%, avg=2586.89, stdev=81.20, samples=19 00:32:56.557 iops : min= 608, max= 673, avg=646.68, stdev=20.29, samples=19 00:32:56.557 lat (msec) : 20=0.25%, 50=99.75% 00:32:56.557 cpu : usr=99.02%, sys=0.70%, ctx=26, majf=0, minf=9 00:32:56.557 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:56.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.557 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.557 issued rwts: total=6480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.557 filename1: (groupid=0, jobs=1): err= 0: pid=3341575: Fri Dec 6 18:09:42 2024 00:32:56.557 read: IOPS=647, BW=2590KiB/s (2652kB/s)(25.3MiB/10007msec) 00:32:56.557 slat (nsec): min=2932, max=57506, avg=13237.28, stdev=9067.71 00:32:56.557 clat (usec): min=19038, max=31133, avg=24593.01, stdev=776.46 00:32:56.557 lat (usec): min=19046, max=31143, avg=24606.25, stdev=775.49 00:32:56.557 clat percentiles (usec): 00:32:56.557 | 1.00th=[22938], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987], 00:32:56.557 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24511], 60.00th=[24773], 00:32:56.557 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[25822], 00:32:56.557 | 99.00th=[26608], 99.50th=[26870], 99.90th=[31065], 99.95th=[31065], 00:32:56.557 | 99.99th=[31065] 00:32:56.557 bw ( KiB/s): min= 2432, max= 2688, per=4.05%, avg=2586.32, stdev=68.18, samples=19 00:32:56.557 iops : min= 608, max= 672, avg=646.53, stdev=17.02, samples=19 00:32:56.557 lat (msec) : 20=0.25%, 50=99.75% 00:32:56.557 cpu : usr=98.65%, sys=0.96%, ctx=150, majf=0, minf=9 00:32:56.557 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:56.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.557 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.557 issued rwts: total=6480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.557 filename1: (groupid=0, jobs=1): err= 0: pid=3341576: Fri Dec 6 18:09:42 2024 00:32:56.557 read: IOPS=650, BW=2601KiB/s (2663kB/s)(25.4MiB/10015msec) 00:32:56.557 slat (nsec): min=5779, max=58166, avg=11813.81, stdev=8438.67 00:32:56.557 clat (usec): min=12058, max=27405, avg=24507.49, stdev=1168.95 00:32:56.557 lat (usec): min=12064, max=27412, avg=24519.30, stdev=1168.47 00:32:56.557 clat percentiles (usec): 00:32:56.557 | 1.00th=[21890], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987], 00:32:56.557 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 
60.00th=[24773], 00:32:56.557 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[25822], 00:32:56.557 | 99.00th=[26346], 99.50th=[26870], 99.90th=[27395], 99.95th=[27395], 00:32:56.557 | 99.99th=[27395] 00:32:56.557 bw ( KiB/s): min= 2432, max= 2688, per=4.07%, avg=2599.79, stdev=74.36, samples=19 00:32:56.557 iops : min= 608, max= 672, avg=649.89, stdev=18.58, samples=19 00:32:56.557 lat (msec) : 20=0.98%, 50=99.02% 00:32:56.557 cpu : usr=98.46%, sys=1.07%, ctx=142, majf=0, minf=9 00:32:56.557 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:56.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.557 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.557 issued rwts: total=6512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.557 filename1: (groupid=0, jobs=1): err= 0: pid=3341577: Fri Dec 6 18:09:42 2024 00:32:56.557 read: IOPS=650, BW=2602KiB/s (2664kB/s)(25.4MiB/10012msec) 00:32:56.557 slat (nsec): min=2895, max=56659, avg=14275.31, stdev=9270.14 00:32:56.557 clat (usec): min=9091, max=27443, avg=24476.86, stdev=1324.31 00:32:56.557 lat (usec): min=9094, max=27451, avg=24491.14, stdev=1324.56 00:32:56.557 clat percentiles (usec): 00:32:56.557 | 1.00th=[21890], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987], 00:32:56.557 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511], 00:32:56.557 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[25822], 00:32:56.557 | 99.00th=[26346], 99.50th=[26870], 99.90th=[27395], 99.95th=[27395], 00:32:56.557 | 99.99th=[27395] 00:32:56.557 bw ( KiB/s): min= 2432, max= 2688, per=4.07%, avg=2598.84, stdev=74.35, samples=19 00:32:56.557 iops : min= 608, max= 672, avg=649.58, stdev=18.58, samples=19 00:32:56.557 lat (msec) : 10=0.21%, 20=0.55%, 50=99.23% 00:32:56.557 cpu : usr=98.71%, sys=0.89%, ctx=51, majf=0, minf=9 00:32:56.557 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:56.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.557 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.557 issued rwts: total=6512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.557 filename1: (groupid=0, jobs=1): err= 0: pid=3341578: Fri Dec 6 18:09:42 2024 00:32:56.557 read: IOPS=647, BW=2591KiB/s (2653kB/s)(25.3MiB/10005msec) 00:32:56.557 slat (nsec): min=4238, max=63419, avg=18351.53, stdev=11034.14 00:32:56.557 clat (usec): min=15177, max=32590, avg=24531.37, stdev=887.26 00:32:56.557 lat (usec): min=15184, max=32603, avg=24549.72, stdev=886.97 00:32:56.557 clat percentiles (usec): 00:32:56.557 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:32:56.557 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:32:56.557 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25560], 00:32:56.557 | 99.00th=[26346], 99.50th=[26870], 99.90th=[32637], 99.95th=[32637], 00:32:56.557 | 99.99th=[32637] 00:32:56.557 bw ( KiB/s): min= 2432, max= 2693, per=4.05%, avg=2587.68, stdev=80.56, samples=19 00:32:56.557 iops : min= 608, max= 673, avg=646.89, stdev=20.13, samples=19 00:32:56.557 lat (msec) : 20=0.25%, 50=99.75% 00:32:56.557 cpu : usr=98.86%, sys=0.89%, ctx=10, majf=0, minf=9 00:32:56.557 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:56.557 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.557 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.557 issued rwts: total=6480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.557 filename1: (groupid=0, jobs=1): err= 0: pid=3341579: Fri Dec 6 18:09:42 2024 00:32:56.557 read: IOPS=650, BW=2601KiB/s (2663kB/s)(25.4MiB/10016msec) 00:32:56.557 slat (nsec): min=5758, max=53829, avg=9373.29, stdev=6315.58 00:32:56.557 clat (usec): min=12075, max=35557, avg=24530.90, stdev=1431.92 00:32:56.557 lat (usec): min=12084, max=35563, avg=24540.27, stdev=1431.54 00:32:56.557 clat percentiles (usec): 00:32:56.557 | 1.00th=[16712], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987], 00:32:56.557 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24511], 60.00th=[24773], 00:32:56.557 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[25822], 00:32:56.557 | 99.00th=[26870], 99.50th=[27395], 99.90th=[33162], 99.95th=[34866], 00:32:56.557 | 99.99th=[35390] 00:32:56.557 bw ( KiB/s): min= 2432, max= 2688, per=4.07%, avg=2599.79, stdev=74.36, samples=19 00:32:56.557 iops : min= 608, max= 672, avg=649.89, stdev=18.58, samples=19 00:32:56.557 lat (msec) : 20=1.47%, 50=98.53% 00:32:56.557 cpu : usr=98.67%, sys=0.92%, ctx=36, majf=0, minf=9 00:32:56.557 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:56.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.557 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.557 issued rwts: total=6512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.557 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.557 filename1: (groupid=0, jobs=1): err= 0: pid=3341580: Fri Dec 6 18:09:42 2024 00:32:56.557 read: IOPS=797, BW=3191KiB/s (3268kB/s)(31.2MiB/10013msec) 00:32:56.557 slat (nsec): min=2876, max=47022, avg=6762.42, stdev=2246.74 00:32:56.557 clat (usec): min=8483, max=39287, avg=20007.10, stdev=4549.19 00:32:56.558 lat (usec): min=8490, max=39294, avg=20013.86, stdev=4549.64 00:32:56.558 clat percentiles (usec): 00:32:56.558 | 1.00th=[11731], 5.00th=[14615], 10.00th=[15664], 20.00th=[16188], 00:32:56.558 | 30.00th=[16581], 40.00th=[16909], 50.00th=[17695], 60.00th=[21890], 00:32:56.558 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25035], 95.00th=[25560], 00:32:56.558 | 99.00th=[33162], 99.50th=[34866], 99.90th=[38536], 99.95th=[39060], 00:32:56.558 | 99.99th=[39060] 00:32:56.558 bw ( KiB/s): min= 2560, max= 3888, per=5.01%, avg=3199.16, stdev=411.68, samples=19 00:32:56.558 iops : min= 640, max= 972, avg=799.74, stdev=102.89, samples=19 00:32:56.558 lat (msec) : 10=0.16%, 20=55.93%, 50=43.90% 00:32:56.558 cpu : usr=98.76%, sys=0.98%, ctx=13, majf=0, minf=9 00:32:56.558 IO depths : 1=1.8%, 2=3.9%, 4=12.4%, 8=71.0%, 16=10.9%, 32=0.0%, >=64=0.0% 00:32:56.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.558 complete : 0=0.0%, 4=90.6%, 8=4.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.558 issued rwts: total=7988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.558 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.558 filename2: (groupid=0, jobs=1): err= 0: pid=3341581: Fri Dec 6 18:09:42 2024 00:32:56.558 read: IOPS=647, BW=2590KiB/s (2652kB/s)(25.3MiB/10008msec) 00:32:56.558 slat (nsec): min=2828, max=51750, avg=10112.24, stdev=6219.35 00:32:56.558 clat (usec): min=18861, max=32315, 
avg=24623.86, stdev=807.17 00:32:56.558 lat (usec): min=18868, max=32325, avg=24633.97, stdev=806.51 00:32:56.558 clat percentiles (usec): 00:32:56.558 | 1.00th=[22938], 5.00th=[23725], 10.00th=[23987], 20.00th=[23987], 00:32:56.558 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24511], 60.00th=[24773], 00:32:56.558 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[25822], 00:32:56.558 | 99.00th=[26608], 99.50th=[26870], 99.90th=[32375], 99.95th=[32375], 00:32:56.558 | 99.99th=[32375] 00:32:56.558 bw ( KiB/s): min= 2432, max= 2688, per=4.05%, avg=2586.00, stdev=80.55, samples=19 00:32:56.558 iops : min= 608, max= 672, avg=646.42, stdev=20.13, samples=19 00:32:56.558 lat (msec) : 20=0.25%, 50=99.75% 00:32:56.558 cpu : usr=98.96%, sys=0.74%, ctx=60, majf=0, minf=9 00:32:56.558 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:56.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.558 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.558 issued rwts: total=6480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.558 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.558 filename2: (groupid=0, jobs=1): err= 0: pid=3341582: Fri Dec 6 18:09:42 2024 00:32:56.558 read: IOPS=646, BW=2584KiB/s (2646kB/s)(25.3MiB/10014msec) 00:32:56.558 slat (nsec): min=5776, max=61569, avg=15614.03, stdev=8585.49 00:32:56.558 clat (usec): min=18756, max=39032, avg=24622.13, stdev=1195.58 00:32:56.558 lat (usec): min=18769, max=39039, avg=24637.75, stdev=1195.36 00:32:56.558 clat percentiles (usec): 00:32:56.558 | 1.00th=[23200], 5.00th=[23725], 10.00th=[23987], 20.00th=[23987], 00:32:56.558 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511], 00:32:56.558 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25297], 95.00th=[25822], 00:32:56.558 | 99.00th=[30016], 99.50th=[31851], 99.90th=[38536], 99.95th=[38536], 00:32:56.558 | 99.99th=[39060] 00:32:56.558 bw ( KiB/s): min= 2432, max= 2688, per=4.04%, avg=2581.79, stdev=63.77, samples=19 00:32:56.558 iops : min= 608, max= 672, avg=645.37, stdev=15.90, samples=19 00:32:56.558 lat (msec) : 20=0.49%, 50=99.51% 00:32:56.558 cpu : usr=98.72%, sys=0.93%, ctx=80, majf=0, minf=9 00:32:56.558 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:56.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.558 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.558 issued rwts: total=6470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.558 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.558 filename2: (groupid=0, jobs=1): err= 0: pid=3341583: Fri Dec 6 18:09:42 2024 00:32:56.558 read: IOPS=647, BW=2589KiB/s (2651kB/s)(25.3MiB/10003msec) 00:32:56.558 slat (nsec): min=4304, max=68120, avg=17795.41, stdev=11166.34 00:32:56.558 clat (usec): min=7322, max=64399, avg=24550.29, stdev=2378.08 00:32:56.558 lat (usec): min=7329, max=64411, avg=24568.09, stdev=2377.98 00:32:56.558 clat percentiles (usec): 00:32:56.558 | 1.00th=[16581], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:32:56.558 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24511], 00:32:56.558 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[26084], 00:32:56.558 | 99.00th=[33424], 99.50th=[33817], 99.90th=[50070], 99.95th=[50070], 00:32:56.558 | 99.99th=[64226] 00:32:56.558 bw ( KiB/s): min= 2432, max= 2704, per=4.05%, avg=2584.89, stdev=72.44, samples=19 
00:32:56.558 iops : min= 608, max= 676, avg=646.16, stdev=18.15, samples=19 00:32:56.558 lat (msec) : 10=0.25%, 20=2.07%, 50=97.67%, 100=0.02% 00:32:56.558 cpu : usr=99.00%, sys=0.72%, ctx=37, majf=0, minf=9 00:32:56.558 IO depths : 1=5.5%, 2=11.3%, 4=23.5%, 8=52.5%, 16=7.2%, 32=0.0%, >=64=0.0% 00:32:56.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.558 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.558 issued rwts: total=6474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.558 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.558 filename2: (groupid=0, jobs=1): err= 0: pid=3341584: Fri Dec 6 18:09:42 2024 00:32:56.558 read: IOPS=705, BW=2823KiB/s (2891kB/s)(27.6MiB/10020msec) 00:32:56.558 slat (nsec): min=5728, max=64597, avg=11230.43, stdev=8615.25 00:32:56.558 clat (usec): min=10986, max=41415, avg=22582.49, stdev=4575.36 00:32:56.558 lat (usec): min=10994, max=41423, avg=22593.72, stdev=4576.87 00:32:56.558 clat percentiles (usec): 00:32:56.558 | 1.00th=[13960], 5.00th=[15795], 10.00th=[16450], 20.00th=[17695], 00:32:56.558 | 30.00th=[20055], 40.00th=[21627], 50.00th=[23987], 60.00th=[24249], 00:32:56.558 | 70.00th=[24511], 80.00th=[25035], 90.00th=[27132], 95.00th=[30540], 00:32:56.558 | 99.00th=[35914], 99.50th=[38011], 99.90th=[41157], 99.95th=[41157], 00:32:56.558 | 99.99th=[41157] 00:32:56.558 bw ( KiB/s): min= 2522, max= 3072, per=4.42%, avg=2824.20, stdev=144.69, samples=20 00:32:56.558 iops : min= 630, max= 768, avg=706.00, stdev=36.22, samples=20 00:32:56.558 lat (msec) : 20=29.62%, 50=70.38% 00:32:56.558 cpu : usr=98.75%, sys=0.96%, ctx=43, majf=0, minf=9 00:32:56.558 IO depths : 1=1.8%, 2=3.5%, 4=10.1%, 8=72.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:32:56.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.558 complete : 0=0.0%, 4=90.1%, 8=5.5%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.558 issued rwts: total=7072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.558 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.558 filename2: (groupid=0, jobs=1): err= 0: pid=3341585: Fri Dec 6 18:09:42 2024 00:32:56.558 read: IOPS=656, BW=2627KiB/s (2690kB/s)(25.7MiB/10009msec) 00:32:56.558 slat (nsec): min=3373, max=63452, avg=12147.30, stdev=9369.49 00:32:56.558 clat (usec): min=11028, max=53033, avg=24291.76, stdev=4042.77 00:32:56.558 lat (usec): min=11034, max=53043, avg=24303.90, stdev=4042.84 00:32:56.558 clat percentiles (usec): 00:32:56.558 | 1.00th=[14484], 5.00th=[17957], 10.00th=[19530], 20.00th=[21103], 00:32:56.558 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:32:56.558 | 70.00th=[25035], 80.00th=[25822], 90.00th=[28181], 95.00th=[30802], 00:32:56.558 | 99.00th=[36963], 99.50th=[39584], 99.90th=[53216], 99.95th=[53216], 00:32:56.558 | 99.99th=[53216] 00:32:56.558 bw ( KiB/s): min= 2384, max= 2768, per=4.11%, avg=2624.79, stdev=94.78, samples=19 00:32:56.558 iops : min= 596, max= 692, avg=656.16, stdev=23.69, samples=19 00:32:56.558 lat (msec) : 20=13.33%, 50=86.43%, 100=0.24% 00:32:56.558 cpu : usr=98.98%, sys=0.76%, ctx=13, majf=0, minf=9 00:32:56.558 IO depths : 1=0.6%, 2=1.2%, 4=5.1%, 8=78.3%, 16=14.8%, 32=0.0%, >=64=0.0% 00:32:56.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.558 complete : 0=0.0%, 4=89.4%, 8=7.9%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.558 issued rwts: total=6574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.558 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:32:56.558 filename2: (groupid=0, jobs=1): err= 0: pid=3341586: Fri Dec 6 18:09:42 2024 00:32:56.558 read: IOPS=656, BW=2625KiB/s (2688kB/s)(25.7MiB/10007msec) 00:32:56.558 slat (nsec): min=2993, max=62995, avg=12732.99, stdev=9219.18 00:32:56.558 clat (usec): min=12182, max=52242, avg=24285.39, stdev=2650.79 00:32:56.558 lat (usec): min=12188, max=52251, avg=24298.12, stdev=2651.59 00:32:56.558 clat percentiles (usec): 00:32:56.558 | 1.00th=[15795], 5.00th=[17957], 10.00th=[23462], 20.00th=[23987], 00:32:56.558 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:32:56.558 | 70.00th=[24773], 80.00th=[25035], 90.00th=[25560], 95.00th=[26084], 00:32:56.558 | 99.00th=[33424], 99.50th=[33817], 99.90th=[40109], 99.95th=[52167], 00:32:56.558 | 99.99th=[52167] 00:32:56.558 bw ( KiB/s): min= 2432, max= 2784, per=4.08%, avg=2608.42, stdev=82.72, samples=19 00:32:56.558 iops : min= 608, max= 696, avg=652.00, stdev=20.70, samples=19 00:32:56.558 lat (msec) : 20=6.53%, 50=93.39%, 100=0.08% 00:32:56.558 cpu : usr=98.78%, sys=0.80%, ctx=86, majf=0, minf=9 00:32:56.558 IO depths : 1=2.1%, 2=4.5%, 4=10.5%, 8=69.6%, 16=13.3%, 32=0.0%, >=64=0.0% 00:32:56.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.558 complete : 0=0.0%, 4=89.8%, 8=7.3%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.558 issued rwts: total=6568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.558 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.558 filename2: (groupid=0, jobs=1): err= 0: pid=3341587: Fri Dec 6 18:09:42 2024 00:32:56.558 read: IOPS=710, BW=2840KiB/s (2909kB/s)(27.8MiB/10004msec) 00:32:56.558 slat (nsec): min=3000, max=58366, avg=11582.81, stdev=8183.39 00:32:56.558 clat (usec): min=9556, max=50733, avg=22449.33, stdev=3995.95 00:32:56.558 lat (usec): min=9562, max=50742, avg=22460.91, stdev=3998.00 00:32:56.558 clat percentiles (usec): 00:32:56.558 | 1.00th=[12387], 5.00th=[16057], 10.00th=[16581], 20.00th=[17433], 00:32:56.558 | 30.00th=[21365], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:32:56.558 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25297], 95.00th=[25822], 00:32:56.558 | 99.00th=[29754], 99.50th=[33817], 99.90th=[50594], 99.95th=[50594], 00:32:56.558 | 99.99th=[50594] 00:32:56.558 bw ( KiB/s): min= 2432, max= 3664, per=4.46%, avg=2849.89, stdev=339.15, samples=19 00:32:56.558 iops : min= 608, max= 916, avg=712.42, stdev=84.80, samples=19 00:32:56.558 lat (msec) : 10=0.06%, 20=25.83%, 50=73.89%, 100=0.23% 00:32:56.558 cpu : usr=99.02%, sys=0.69%, ctx=67, majf=0, minf=9 00:32:56.558 IO depths : 1=2.0%, 2=4.8%, 4=13.0%, 8=68.1%, 16=12.2%, 32=0.0%, >=64=0.0% 00:32:56.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.559 complete : 0=0.0%, 4=91.3%, 8=4.6%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.559 issued rwts: total=7104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.559 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.559 filename2: (groupid=0, jobs=1): err= 0: pid=3341588: Fri Dec 6 18:09:42 2024 00:32:56.559 read: IOPS=667, BW=2672KiB/s (2736kB/s)(26.1MiB/10004msec) 00:32:56.559 slat (nsec): min=3197, max=61263, avg=13067.11, stdev=9767.07 00:32:56.559 clat (usec): min=5178, max=56594, avg=23875.47, stdev=4185.12 00:32:56.559 lat (usec): min=5184, max=56603, avg=23888.53, stdev=4185.79 00:32:56.559 clat percentiles (usec): 00:32:56.559 | 1.00th=[15139], 5.00th=[16909], 10.00th=[18744], 20.00th=[20579], 00:32:56.559 | 30.00th=[23462], 
40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:32:56.559 | 70.00th=[24773], 80.00th=[25035], 90.00th=[27919], 95.00th=[32375], 00:32:56.559 | 99.00th=[36963], 99.50th=[38536], 99.90th=[44827], 99.95th=[44827], 00:32:56.559 | 99.99th=[56361] 00:32:56.559 bw ( KiB/s): min= 2452, max= 3072, per=4.18%, avg=2668.47, stdev=129.68, samples=19 00:32:56.559 iops : min= 613, max= 768, avg=667.05, stdev=32.39, samples=19 00:32:56.559 lat (msec) : 10=0.03%, 20=15.21%, 50=84.74%, 100=0.03% 00:32:56.559 cpu : usr=98.90%, sys=0.85%, ctx=7, majf=0, minf=9 00:32:56.559 IO depths : 1=0.3%, 2=2.4%, 4=11.0%, 8=72.7%, 16=13.6%, 32=0.0%, >=64=0.0% 00:32:56.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.559 complete : 0=0.0%, 4=90.7%, 8=5.2%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:56.559 issued rwts: total=6682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:56.559 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:56.559 00:32:56.559 Run status group 0 (all jobs): 00:32:56.559 READ: bw=62.4MiB/s (65.4MB/s), 2584KiB/s-3191KiB/s (2646kB/s-3268kB/s), io=625MiB (655MB), run=10002-10022msec 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:56.559 bdev_null0 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:56.559 [2024-12-06 18:09:42.733542] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:56.559 bdev_null1 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@560 -- # local subsystem config 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:56.559 18:09:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:56.559 { 00:32:56.559 "params": { 00:32:56.559 "name": "Nvme$subsystem", 00:32:56.559 "trtype": "$TEST_TRANSPORT", 00:32:56.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:56.559 "adrfam": "ipv4", 00:32:56.559 "trsvcid": "$NVMF_PORT", 00:32:56.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:56.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:56.560 "hdgst": ${hdgst:-false}, 00:32:56.560 "ddgst": ${ddgst:-false} 00:32:56.560 }, 00:32:56.560 "method": "bdev_nvme_attach_controller" 00:32:56.560 } 00:32:56.560 EOF 00:32:56.560 )") 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:56.560 { 00:32:56.560 "params": { 00:32:56.560 "name": "Nvme$subsystem", 00:32:56.560 "trtype": "$TEST_TRANSPORT", 00:32:56.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:56.560 "adrfam": "ipv4", 00:32:56.560 "trsvcid": "$NVMF_PORT", 00:32:56.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:56.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:56.560 "hdgst": ${hdgst:-false}, 00:32:56.560 "ddgst": ${ddgst:-false} 00:32:56.560 }, 00:32:56.560 "method": "bdev_nvme_attach_controller" 
00:32:56.560 } 00:32:56.560 EOF 00:32:56.560 )") 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:56.560 "params": { 00:32:56.560 "name": "Nvme0", 00:32:56.560 "trtype": "tcp", 00:32:56.560 "traddr": "10.0.0.2", 00:32:56.560 "adrfam": "ipv4", 00:32:56.560 "trsvcid": "4420", 00:32:56.560 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:56.560 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:56.560 "hdgst": false, 00:32:56.560 "ddgst": false 00:32:56.560 }, 00:32:56.560 "method": "bdev_nvme_attach_controller" 00:32:56.560 },{ 00:32:56.560 "params": { 00:32:56.560 "name": "Nvme1", 00:32:56.560 "trtype": "tcp", 00:32:56.560 "traddr": "10.0.0.2", 00:32:56.560 "adrfam": "ipv4", 00:32:56.560 "trsvcid": "4420", 00:32:56.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:56.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:56.560 "hdgst": false, 00:32:56.560 "ddgst": false 00:32:56.560 }, 00:32:56.560 "method": "bdev_nvme_attach_controller" 00:32:56.560 }' 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:56.560 18:09:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:56.560 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:56.560 ... 00:32:56.560 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:56.560 ... 
00:32:56.560 fio-3.35 00:32:56.560 Starting 4 threads 00:33:01.841 00:33:01.841 filename0: (groupid=0, jobs=1): err= 0: pid=3344200: Fri Dec 6 18:09:48 2024 00:33:01.841 read: IOPS=3033, BW=23.7MiB/s (24.9MB/s)(119MiB/5002msec) 00:33:01.841 slat (nsec): min=4197, max=32546, avg=7392.52, stdev=2204.51 00:33:01.841 clat (usec): min=1176, max=4634, avg=2617.48, stdev=361.09 00:33:01.841 lat (usec): min=1183, max=4643, avg=2624.87, stdev=361.19 00:33:01.841 clat percentiles (usec): 00:33:01.841 | 1.00th=[ 1827], 5.00th=[ 2040], 10.00th=[ 2180], 20.00th=[ 2343], 00:33:01.841 | 30.00th=[ 2474], 40.00th=[ 2573], 50.00th=[ 2671], 60.00th=[ 2704], 00:33:01.841 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2966], 95.00th=[ 3294], 00:33:01.841 | 99.00th=[ 3785], 99.50th=[ 4047], 99.90th=[ 4293], 99.95th=[ 4424], 00:33:01.841 | 99.99th=[ 4621] 00:33:01.841 bw ( KiB/s): min=23840, max=24672, per=25.86%, avg=24239.89, stdev=313.57, samples=9 00:33:01.841 iops : min= 2980, max= 3084, avg=3029.89, stdev=39.23, samples=9 00:33:01.841 lat (msec) : 2=3.61%, 4=95.79%, 10=0.60% 00:33:01.841 cpu : usr=96.58%, sys=3.18%, ctx=8, majf=0, minf=9 00:33:01.841 IO depths : 1=0.2%, 2=1.8%, 4=68.0%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.841 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.841 issued rwts: total=15175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.841 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:01.841 filename0: (groupid=0, jobs=1): err= 0: pid=3344201: Fri Dec 6 18:09:48 2024 00:33:01.841 read: IOPS=2854, BW=22.3MiB/s (23.4MB/s)(112MiB/5001msec) 00:33:01.841 slat (nsec): min=4102, max=31354, avg=6931.44, stdev=2122.22 00:33:01.841 clat (usec): min=1140, max=7406, avg=2783.99, stdev=388.08 00:33:01.841 lat (usec): min=1149, max=7418, avg=2790.93, stdev=387.99 00:33:01.841 clat percentiles (usec): 00:33:01.841 | 1.00th=[ 2024], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2573], 00:33:01.841 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:33:01.841 | 70.00th=[ 2769], 80.00th=[ 2933], 90.00th=[ 3195], 95.00th=[ 3654], 00:33:01.841 | 99.00th=[ 4178], 99.50th=[ 4293], 99.90th=[ 4948], 99.95th=[ 5932], 00:33:01.841 | 99.99th=[ 7373] 00:33:01.841 bw ( KiB/s): min=22048, max=23264, per=24.45%, avg=22918.78, stdev=381.94, samples=9 00:33:01.841 iops : min= 2756, max= 2908, avg=2864.78, stdev=47.75, samples=9 00:33:01.841 lat (msec) : 2=0.90%, 4=96.88%, 10=2.22% 00:33:01.841 cpu : usr=96.86%, sys=2.90%, ctx=6, majf=0, minf=9 00:33:01.841 IO depths : 1=0.1%, 2=0.4%, 4=71.4%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.841 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.841 issued rwts: total=14275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.841 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:01.841 filename1: (groupid=0, jobs=1): err= 0: pid=3344202: Fri Dec 6 18:09:48 2024 00:33:01.841 read: IOPS=2913, BW=22.8MiB/s (23.9MB/s)(114MiB/5001msec) 00:33:01.841 slat (nsec): min=4184, max=33086, avg=6653.66, stdev=2171.98 00:33:01.841 clat (usec): min=910, max=5117, avg=2727.62, stdev=360.94 00:33:01.841 lat (usec): min=916, max=5123, avg=2734.27, stdev=360.93 00:33:01.841 clat percentiles (usec): 00:33:01.841 | 1.00th=[ 1876], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2507], 00:33:01.841 | 30.00th=[ 2606], 40.00th=[ 
2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:33:01.841 | 70.00th=[ 2769], 80.00th=[ 2900], 90.00th=[ 3032], 95.00th=[ 3392], 00:33:01.841 | 99.00th=[ 4080], 99.50th=[ 4293], 99.90th=[ 4621], 99.95th=[ 4752], 00:33:01.841 | 99.99th=[ 5080] 00:33:01.841 bw ( KiB/s): min=22272, max=24928, per=24.77%, avg=23217.78, stdev=739.97, samples=9 00:33:01.841 iops : min= 2784, max= 3116, avg=2902.22, stdev=92.50, samples=9 00:33:01.841 lat (usec) : 1000=0.02% 00:33:01.841 lat (msec) : 2=1.70%, 4=97.07%, 10=1.21% 00:33:01.841 cpu : usr=96.84%, sys=2.94%, ctx=5, majf=0, minf=9 00:33:01.841 IO depths : 1=0.1%, 2=1.6%, 4=70.9%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.841 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.841 issued rwts: total=14570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.841 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:01.841 filename1: (groupid=0, jobs=1): err= 0: pid=3344203: Fri Dec 6 18:09:48 2024 00:33:01.841 read: IOPS=2917, BW=22.8MiB/s (23.9MB/s)(114MiB/5002msec) 00:33:01.841 slat (nsec): min=2902, max=32881, avg=6204.05, stdev=1706.19 00:33:01.841 clat (usec): min=1335, max=4674, avg=2725.73, stdev=348.79 00:33:01.841 lat (usec): min=1341, max=4680, avg=2731.93, stdev=348.80 00:33:01.841 clat percentiles (usec): 00:33:01.841 | 1.00th=[ 1942], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2540], 00:33:01.841 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2737], 00:33:01.841 | 70.00th=[ 2737], 80.00th=[ 2835], 90.00th=[ 2999], 95.00th=[ 3425], 00:33:01.841 | 99.00th=[ 4080], 99.50th=[ 4228], 99.90th=[ 4490], 99.95th=[ 4555], 00:33:01.841 | 99.99th=[ 4686] 00:33:01.841 bw ( KiB/s): min=22176, max=23856, per=24.99%, avg=23429.33, stdev=516.05, samples=9 00:33:01.841 iops : min= 2772, max= 2982, avg=2928.67, stdev=64.51, samples=9 00:33:01.841 lat (msec) : 2=1.16%, 4=97.43%, 10=1.40% 00:33:01.841 cpu : usr=95.48%, sys=3.58%, ctx=271, majf=0, minf=9 00:33:01.841 IO depths : 1=0.1%, 2=0.2%, 4=70.1%, 8=29.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:01.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.841 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:01.841 issued rwts: total=14594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:01.841 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:01.841 00:33:01.841 Run status group 0 (all jobs): 00:33:01.841 READ: bw=91.5MiB/s (96.0MB/s), 22.3MiB/s-23.7MiB/s (23.4MB/s-24.9MB/s), io=458MiB (480MB), run=5001-5002msec 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.841 18:09:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.841 00:33:01.841 real 0m23.837s 00:33:01.841 user 5m6.101s 00:33:01.841 sys 0m4.184s 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:01.841 18:09:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:01.841 ************************************ 00:33:01.841 END TEST fio_dif_rand_params 00:33:01.841 ************************************ 00:33:01.841 18:09:48 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:01.841 18:09:48 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:01.841 18:09:48 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:01.841 18:09:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:01.841 ************************************ 00:33:01.841 START TEST fio_dif_digest 00:33:01.841 ************************************ 00:33:01.841 18:09:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:33:01.841 18:09:48 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:01.841 18:09:48 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:01.841 18:09:48 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:01.841 18:09:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:01.841 18:09:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:01.841 18:09:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:01.841 18:09:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:01.841 18:09:48 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:01.841 18:09:48 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:01.841 18:09:48 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:01.841 18:09:48 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:01.841 18:09:48 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:33:01.841 18:09:48 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:01.841 18:09:48 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:01.841 18:09:48 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:01.841 18:09:48 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:01.841 18:09:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.841 18:09:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:01.841 bdev_null0 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:01.841 [2024-12-06 18:09:49.028207] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:33:01.841 { 00:33:01.841 "params": { 00:33:01.841 "name": "Nvme$subsystem", 00:33:01.841 "trtype": "$TEST_TRANSPORT", 00:33:01.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:01.841 "adrfam": "ipv4", 00:33:01.841 "trsvcid": "$NVMF_PORT", 00:33:01.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:01.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:01.841 "hdgst": ${hdgst:-false}, 00:33:01.841 "ddgst": ${ddgst:-false} 00:33:01.841 }, 00:33:01.841 "method": "bdev_nvme_attach_controller" 00:33:01.841 } 00:33:01.841 EOF 00:33:01.841 )") 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
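(For reference: the heredoc assembled above is the per-subsystem fragment that jq then merges into the JSON handed to fio via --spdk_json_conf. A minimal self-contained sketch of an equivalent generator follows, using the parameter values this run resolves to just below. The outer "subsystems"/"bdev" wrapper layout and the bdev.json output filename are assumptions for illustration only — this log prints just the inner fragment, not the full document nvmf/common.sh produces.

#!/usr/bin/env bash
# Sketch: emit a one-controller SPDK bdev config usable as fio's
# --spdk_json_conf input. Wrapper structure is assumed, not taken
# verbatim from this trace.
gen_attach_json() {
  local ip=$1 port=$2 sub=$3
  cat <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme${sub}",
            "trtype": "tcp",
            "traddr": "${ip}",
            "adrfam": "ipv4",
            "trsvcid": "${port}",
            "subnqn": "nqn.2016-06.io.spdk:cnode${sub}",
            "hostnqn": "nqn.2016-06.io.spdk:host${sub}",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF
}

gen_attach_json 10.0.0.2 4420 0 > bdev.json
)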
00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:01.841 "params": { 00:33:01.841 "name": "Nvme0", 00:33:01.841 "trtype": "tcp", 00:33:01.841 "traddr": "10.0.0.2", 00:33:01.841 "adrfam": "ipv4", 00:33:01.841 "trsvcid": "4420", 00:33:01.841 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:01.841 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:01.841 "hdgst": true, 00:33:01.841 "ddgst": true 00:33:01.841 }, 00:33:01.841 "method": "bdev_nvme_attach_controller" 00:33:01.841 }' 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:01.841 18:09:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:01.841 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:01.841 ... 
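(For reference: the job banner below — randread, 128KiB blocks, iodepth 3, ioengine spdk_bdev, three threads, 10 s runtime — corresponds to a job file along the following lines. This is a sketch reconstructed from the parameters fio prints, not the file gen_fio_conf actually emitted; the bdev name Nvme0n1 and the plugin/config paths are assumptions for illustration.

#!/usr/bin/env bash
# Sketch: write an equivalent fio job and run it through the SPDK
# bdev plugin, mirroring the LD_PRELOAD invocation in this trace.
cat > digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=./bdev.json
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=10
time_based=1

[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD=./spdk/build/fio/spdk_bdev fio digest.fio
)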
00:33:01.841 fio-3.35 00:33:01.841 Starting 3 threads 00:33:14.125 00:33:14.125 filename0: (groupid=0, jobs=1): err= 0: pid=3345623: Fri Dec 6 18:09:59 2024 00:33:14.125 read: IOPS=297, BW=37.2MiB/s (39.0MB/s)(374MiB/10044msec) 00:33:14.125 slat (nsec): min=4359, max=36605, avg=7320.93, stdev=1328.74 00:33:14.125 clat (usec): min=7183, max=51978, avg=10061.95, stdev=1391.32 00:33:14.125 lat (usec): min=7189, max=51986, avg=10069.27, stdev=1391.40 00:33:14.125 clat percentiles (usec): 00:33:14.125 | 1.00th=[ 7898], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9372], 00:33:14.125 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:33:14.125 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11469], 00:33:14.125 | 99.00th=[12256], 99.50th=[12387], 99.90th=[19530], 99.95th=[47973], 00:33:14.125 | 99.99th=[52167] 00:33:14.125 bw ( KiB/s): min=36608, max=40960, per=34.13%, avg=38220.80, stdev=1027.70, samples=20 00:33:14.125 iops : min= 286, max= 320, avg=298.60, stdev= 8.03, samples=20 00:33:14.125 lat (msec) : 10=47.86%, 20=52.07%, 50=0.03%, 100=0.03% 00:33:14.125 cpu : usr=96.01%, sys=3.74%, ctx=16, majf=0, minf=72 00:33:14.125 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:14.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:14.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:14.125 issued rwts: total=2988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:14.126 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:14.126 filename0: (groupid=0, jobs=1): err= 0: pid=3345624: Fri Dec 6 18:09:59 2024 00:33:14.126 read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(366MiB/10048msec) 00:33:14.126 slat (nsec): min=4495, max=43810, avg=7324.12, stdev=1649.83 00:33:14.126 clat (usec): min=6483, max=49287, avg=10274.84, stdev=1367.94 00:33:14.126 lat (usec): min=6491, max=49294, avg=10282.17, stdev=1367.94 00:33:14.126 clat percentiles (usec): 00:33:14.126 | 1.00th=[ 8029], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9503], 00:33:14.126 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:33:14.126 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11469], 95.00th=[11731], 00:33:14.126 | 99.00th=[12518], 99.50th=[13042], 99.90th=[13960], 99.95th=[48497], 00:33:14.126 | 99.99th=[49546] 00:33:14.126 bw ( KiB/s): min=36096, max=39936, per=33.42%, avg=37430.95, stdev=855.81, samples=20 00:33:14.126 iops : min= 282, max= 312, avg=292.40, stdev= 6.67, samples=20 00:33:14.126 lat (msec) : 10=39.43%, 20=60.51%, 50=0.07% 00:33:14.126 cpu : usr=95.61%, sys=4.13%, ctx=14, majf=0, minf=160 00:33:14.126 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:14.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:14.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:14.126 issued rwts: total=2927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:14.126 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:14.126 filename0: (groupid=0, jobs=1): err= 0: pid=3345625: Fri Dec 6 18:09:59 2024 00:33:14.126 read: IOPS=286, BW=35.8MiB/s (37.5MB/s)(360MiB/10046msec) 00:33:14.126 slat (nsec): min=4398, max=23998, avg=7134.69, stdev=1313.03 00:33:14.126 clat (usec): min=7208, max=47619, avg=10454.83, stdev=1325.88 00:33:14.126 lat (usec): min=7214, max=47626, avg=10461.97, stdev=1325.87 00:33:14.126 clat percentiles (usec): 00:33:14.126 | 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9634], 
00:33:14.126 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:33:14.126 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:33:14.126 | 99.00th=[12518], 99.50th=[12911], 99.90th=[14746], 99.95th=[45876], 00:33:14.126 | 99.99th=[47449] 00:33:14.126 bw ( KiB/s): min=36096, max=37632, per=32.85%, avg=36787.20, stdev=512.67, samples=20 00:33:14.126 iops : min= 282, max= 294, avg=287.40, stdev= 4.01, samples=20 00:33:14.126 lat (msec) : 10=30.15%, 20=69.78%, 50=0.07% 00:33:14.126 cpu : usr=95.83%, sys=3.92%, ctx=16, majf=0, minf=154 00:33:14.126 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:14.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:14.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:14.126 issued rwts: total=2876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:14.126 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:14.126 00:33:14.126 Run status group 0 (all jobs): 00:33:14.126 READ: bw=109MiB/s (115MB/s), 35.8MiB/s-37.2MiB/s (37.5MB/s-39.0MB/s), io=1099MiB (1152MB), run=10044-10048msec 00:33:14.126 18:09:59 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:14.126 18:09:59 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:33:14.126 18:09:59 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:33:14.126 18:09:59 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:14.126 18:09:59 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:33:14.126 18:09:59 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:14.126 18:09:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.126 18:09:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:14.126 18:09:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.126 18:09:59 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:14.126 18:09:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.126 18:09:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:14.126 18:10:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.126 00:33:14.126 real 0m11.008s 00:33:14.126 user 0m42.885s 00:33:14.126 sys 0m1.491s 00:33:14.126 18:10:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:14.126 18:10:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:14.126 ************************************ 00:33:14.126 END TEST fio_dif_digest 00:33:14.126 ************************************ 00:33:14.126 18:10:00 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:14.126 18:10:00 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:33:14.126 18:10:00 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:14.126 18:10:00 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:33:14.126 18:10:00 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:14.126 18:10:00 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:33:14.126 18:10:00 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:14.126 18:10:00 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:14.126 rmmod nvme_tcp 00:33:14.126 rmmod nvme_fabrics 00:33:14.126 rmmod nvme_keyring 00:33:14.126 18:10:00 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:14.126 18:10:00 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:33:14.126 18:10:00 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:33:14.126 18:10:00 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3333786 ']' 00:33:14.126 18:10:00 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3333786 00:33:14.126 18:10:00 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3333786 ']' 00:33:14.126 18:10:00 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3333786 00:33:14.126 18:10:00 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:33:14.126 18:10:00 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:14.126 18:10:00 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3333786 00:33:14.126 18:10:00 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:14.126 18:10:00 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:14.126 18:10:00 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3333786' 00:33:14.126 killing process with pid 3333786 00:33:14.126 18:10:00 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3333786 00:33:14.126 18:10:00 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3333786 00:33:14.126 18:10:00 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:33:14.126 18:10:00 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:14.695 Waiting for block devices as requested 00:33:14.695 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:14.695 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:14.695 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:14.953 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:14.953 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:14.953 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:14.953 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:15.212 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:15.212 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:15.212 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:15.473 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:15.473 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:15.473 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:15.473 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:15.473 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:15.731 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:15.731 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:15.990 18:10:03 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:15.990 18:10:03 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:15.990 18:10:03 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:33:15.990 18:10:03 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:33:15.990 18:10:03 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:15.990 18:10:03 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:33:15.990 18:10:03 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:15.990 18:10:03 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:15.990 18:10:03 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.990 18:10:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:15.990 18:10:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:18.527 18:10:05 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:18.527 
00:33:18.527 real 1m12.170s 00:33:18.527 user 7m42.728s 00:33:18.527 sys 0m17.698s 00:33:18.527 18:10:05 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:18.527 18:10:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:18.527 ************************************ 00:33:18.527 END TEST nvmf_dif 00:33:18.527 ************************************ 00:33:18.527 18:10:05 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:18.527 18:10:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:18.527 18:10:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:18.527 18:10:05 -- common/autotest_common.sh@10 -- # set +x 00:33:18.527 ************************************ 00:33:18.527 START TEST nvmf_abort_qd_sizes 00:33:18.527 ************************************ 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:18.527 * Looking for test storage... 00:33:18.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:18.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.527 --rc genhtml_branch_coverage=1 00:33:18.527 --rc genhtml_function_coverage=1 00:33:18.527 --rc genhtml_legend=1 00:33:18.527 --rc geninfo_all_blocks=1 00:33:18.527 --rc geninfo_unexecuted_blocks=1 00:33:18.527 00:33:18.527 ' 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:18.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.527 --rc genhtml_branch_coverage=1 00:33:18.527 --rc genhtml_function_coverage=1 00:33:18.527 --rc genhtml_legend=1 00:33:18.527 --rc geninfo_all_blocks=1 00:33:18.527 --rc geninfo_unexecuted_blocks=1 00:33:18.527 00:33:18.527 ' 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:18.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.527 --rc genhtml_branch_coverage=1 00:33:18.527 --rc genhtml_function_coverage=1 00:33:18.527 --rc genhtml_legend=1 00:33:18.527 --rc geninfo_all_blocks=1 00:33:18.527 --rc geninfo_unexecuted_blocks=1 00:33:18.527 00:33:18.527 ' 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:18.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:18.527 --rc genhtml_branch_coverage=1 00:33:18.527 --rc genhtml_function_coverage=1 00:33:18.527 --rc genhtml_legend=1 00:33:18.527 --rc geninfo_all_blocks=1 00:33:18.527 --rc geninfo_unexecuted_blocks=1 00:33:18.527 00:33:18.527 ' 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:18.527 18:10:05 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:18.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:33:18.528 18:10:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:23.811 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:23.812 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:23.812 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:23.812 Found net devices under 0000:31:00.0: cvl_0_0 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:23.812 Found net devices under 0000:31:00.1: cvl_0_1 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:23.812 18:10:10 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:23.812 18:10:10 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:23.812 18:10:11 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:23.812 18:10:11 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:23.812 18:10:11 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:23.812 18:10:11 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:23.812 18:10:11 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:23.812 18:10:11 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:23.812 18:10:11 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:23.812 18:10:11 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:23.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:23.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:33:23.812 00:33:23.812 --- 10.0.0.2 ping statistics --- 00:33:23.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.812 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:33:23.812 18:10:11 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:23.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:23.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:33:23.812 00:33:23.812 --- 10.0.0.1 ping statistics --- 00:33:23.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.812 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:33:23.812 18:10:11 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:23.812 18:10:11 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:33:23.812 18:10:11 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:23.812 18:10:11 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:25.720 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:25.720 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:25.720 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:25.720 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:25.720 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:25.720 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:25.720 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:25.720 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:25.720 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:25.720 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:25.721 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:25.980 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:25.980 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:25.980 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:25.981 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:25.981 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:25.981 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:26.240 18:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:26.240 18:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:26.240 18:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:26.240 18:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:26.240 18:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:26.240 18:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:26.240 18:10:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:26.240 18:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:26.240 18:10:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:26.240 18:10:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:26.240 18:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3355586 00:33:26.240 18:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3355586 00:33:26.240 18:10:13 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:26.240 18:10:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3355586 ']' 00:33:26.240 18:10:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.240 18:10:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:26.240 18:10:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:26.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.240 18:10:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.240 18:10:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:26.240 [2024-12-06 18:10:14.024025] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:33:26.241 [2024-12-06 18:10:14.024072] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:26.501 [2024-12-06 18:10:14.099243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:26.501 [2024-12-06 18:10:14.130834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:26.501 [2024-12-06 18:10:14.130866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:26.501 [2024-12-06 18:10:14.130872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:26.501 [2024-12-06 18:10:14.130877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:26.501 [2024-12-06 18:10:14.130881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:26.501 [2024-12-06 18:10:14.132395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.501 [2024-12-06 18:10:14.132545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:26.501 [2024-12-06 18:10:14.132694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.501 [2024-12-06 18:10:14.132696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:33:26.501 
18:10:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:26.501 18:10:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:26.501 ************************************ 00:33:26.501 START TEST spdk_target_abort 00:33:26.501 ************************************ 00:33:26.501 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:33:26.501 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:26.501 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:33:26.501 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.501 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:26.760 spdk_targetn1 00:33:26.760 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.760 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:26.761 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.761 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:26.761 [2024-12-06 18:10:14.559493] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:26.761 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.761 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:26.761 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.761 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:26.761 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.761 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:26.761 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.761 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:27.021 [2024-12-06 18:10:14.595992] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:27.021 18:10:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:27.021 [2024-12-06 18:10:14.796696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:190 nsid:1 lba:192 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:33:27.021 [2024-12-06 18:10:14.796731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0019 p:1 m:0 dnr:0 00:33:27.021 [2024-12-06 18:10:14.805674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:488 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:33:27.021 [2024-12-06 18:10:14.805694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:003f p:1 m:0 dnr:0 00:33:27.021 [2024-12-06 18:10:14.820732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1024 len:8 PRP1 0x200004abe000 PRP2 0x0 00:33:27.021 [2024-12-06 18:10:14.820754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0082 p:1 m:0 dnr:0 00:33:27.281 [2024-12-06 18:10:14.868703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2576 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:33:27.281 [2024-12-06 18:10:14.868727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:27.281 [2024-12-06 18:10:14.876728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2832 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:33:27.281 [2024-12-06 18:10:14.876749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:27.281 [2024-12-06 18:10:14.893085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3464 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:33:27.281 [2024-12-06 18:10:14.893111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00b2 p:0 m:0 dnr:0 00:33:30.569 Initializing NVMe Controllers 00:33:30.569 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:30.569 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:30.569 Initialization complete. Launching workers. 
00:33:30.569 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12346, failed: 6 00:33:30.569 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3638, failed to submit 8714 00:33:30.569 success 689, unsuccessful 2949, failed 0 00:33:30.569 18:10:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:30.569 18:10:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:30.569 [2024-12-06 18:10:18.124992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:888 len:8 PRP1 0x200004e58000 PRP2 0x0 00:33:30.569 [2024-12-06 18:10:18.125027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0072 p:1 m:0 dnr:0 00:33:30.569 [2024-12-06 18:10:18.140924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:1192 len:8 PRP1 0x200004e46000 PRP2 0x0 00:33:30.569 [2024-12-06 18:10:18.140945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:009c p:1 m:0 dnr:0 00:33:30.569 [2024-12-06 18:10:18.172854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:1936 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:33:30.569 [2024-12-06 18:10:18.172878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:0000 p:1 m:0 dnr:0 00:33:30.569 [2024-12-06 18:10:18.227910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:3136 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:33:30.569 [2024-12-06 18:10:18.227934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:008c p:0 m:0 dnr:0 00:33:33.863 [2024-12-06 18:10:21.236897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c9f0 is same with the state(6) to be set 00:33:33.863 Initializing NVMe Controllers 00:33:33.863 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:33.863 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:33.863 Initialization complete. Launching workers. 
00:33:33.863 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8462, failed: 4 00:33:33.864 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1207, failed to submit 7259 00:33:33.864 success 351, unsuccessful 856, failed 0 00:33:33.864 18:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:33.864 18:10:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:35.767 [2024-12-06 18:10:23.333703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:181 nsid:1 lba:217672 len:8 PRP1 0x200004aec000 PRP2 0x0 00:33:35.767 [2024-12-06 18:10:23.333753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:181 cdw0:0 sqhd:0090 p:1 m:0 dnr:0 00:33:35.767 [2024-12-06 18:10:23.568586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:164 nsid:1 lba:245240 len:8 PRP1 0x200004ad0000 PRP2 0x0 00:33:35.767 [2024-12-06 18:10:23.568608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:164 cdw0:0 sqhd:0003 p:1 m:0 dnr:0 00:33:36.702 Initializing NVMe Controllers 00:33:36.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:36.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:36.702 Initialization complete. Launching workers. 00:33:36.702 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 44027, failed: 2 00:33:36.702 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2620, failed to submit 41409 00:33:36.702 success 598, unsuccessful 2022, failed 0 00:33:36.702 18:10:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:36.702 18:10:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.702 18:10:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:36.702 18:10:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.702 18:10:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:36.702 18:10:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.702 18:10:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:38.604 18:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.604 18:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3355586 00:33:38.604 18:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3355586 ']' 00:33:38.604 18:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3355586 00:33:38.604 18:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:33:38.604 18:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:38.604 18:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3355586 00:33:38.604 18:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:38.604 18:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:38.604 18:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3355586' 00:33:38.604 killing process with pid 3355586 00:33:38.604 18:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3355586 00:33:38.604 18:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3355586 00:33:38.862 00:33:38.862 real 0m12.227s 00:33:38.862 user 0m47.193s 00:33:38.862 sys 0m1.973s 00:33:38.862 18:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:38.862 18:10:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:38.862 ************************************ 00:33:38.862 END TEST spdk_target_abort 00:33:38.862 ************************************ 00:33:38.862 18:10:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:38.862 18:10:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:38.862 18:10:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:38.862 18:10:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:38.862 ************************************ 00:33:38.862 START TEST kernel_target_abort 00:33:38.862 ************************************ 00:33:38.862 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:33:38.862 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:38.862 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:33:38.862 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:38.862 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:38.862 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:38.862 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:38.862 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:38.862 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:38.863 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:38.863 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:38.863 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:38.863 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:38.863 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:38.863 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:38.863 18:10:26 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:38.863 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:38.863 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:38.863 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:33:38.863 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:33:38.863 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:38.863 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:38.863 18:10:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:41.398 Waiting for block devices as requested 00:33:41.398 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:41.398 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:41.398 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:41.398 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:41.398 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:41.398 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:41.398 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:41.398 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:41.398 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:41.657 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:41.657 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:41.918 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:41.918 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:41.919 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:41.919 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:41.919 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:42.178 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:42.438 No valid GPT data, bailing 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:33:42.438 18:10:30 
nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:33:42.438 00:33:42.438 Discovery Log Number of Records 2, Generation counter 2 00:33:42.438 =====Discovery Log Entry 0====== 00:33:42.438 trtype: tcp 00:33:42.438 adrfam: ipv4 00:33:42.438 subtype: current discovery subsystem 00:33:42.438 treq: not specified, sq flow control disable supported 00:33:42.438 portid: 1 00:33:42.438 trsvcid: 4420 00:33:42.438 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:42.438 traddr: 10.0.0.1 00:33:42.438 eflags: none 00:33:42.438 sectype: none 00:33:42.438 =====Discovery Log Entry 1====== 00:33:42.438 trtype: tcp 00:33:42.438 adrfam: ipv4 00:33:42.438 subtype: nvme subsystem 00:33:42.438 treq: not specified, sq flow control disable supported 00:33:42.438 portid: 1 00:33:42.438 trsvcid: 4420 00:33:42.438 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:42.438 traddr: 10.0.0.1 00:33:42.438 eflags: none 00:33:42.438 sectype: none 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:42.438 
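The configure_kernel_target trace above stands up a Linux kernel NVMe-oF soft target through configfs, then verifies it with nvme discover. A condensed sketch of the sequence; the xtrace shows each echoed value but not its redirect target, so the standard nvmet configfs attribute names are assumed here:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    ns=$subsys/namespaces/1
    port=$nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys" "$ns" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"   # assumed attribute name
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$ns/device_path"    # back the namespace with the local NVMe disk
    echo 1            > "$ns/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"      # linking the subsystem activates the port

    # Teardown (clean_kernel_target, later in this log) mirrors the setup:
    # echo 0 > "$ns/enable"; rm -f "$port/subsystems/"*
    # rmdir "$ns" "$port" "$subsys"; modprobe -r nvmet_tcp nvmet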
18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:42.438 18:10:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:45.726 Initializing NVMe Controllers 00:33:45.726 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:45.726 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:45.726 Initialization complete. Launching workers. 
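The rabort helper traced here assembles the -r transport ID incrementally: one loop over the field names, each iteration appending name:value from the like-named local variable. A sketch equivalent in effect to the five steps shown above:

    trtype=tcp adrfam=IPv4 traddr=10.0.0.1 trsvcid=4420
    subnqn=nqn.2016-06.io.spdk:testnqn
    target=
    for r in trtype adrfam traddr trsvcid subnqn; do
        target+="${target:+ }$r:${!r}"   # ${!r} is bash indirect expansion
    done
    # target is now 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'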
00:33:45.726 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94999, failed: 0 00:33:45.727 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 94999, failed to submit 0 00:33:45.727 success 0, unsuccessful 94999, failed 0 00:33:45.727 18:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:45.727 18:10:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:49.017 Initializing NVMe Controllers 00:33:49.017 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:49.017 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:49.017 Initialization complete. Launching workers. 00:33:49.017 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 154766, failed: 0 00:33:49.017 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38886, failed to submit 115880 00:33:49.017 success 0, unsuccessful 38886, failed 0 00:33:49.017 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:49.017 18:10:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:52.306 Initializing NVMe Controllers 00:33:52.306 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:52.306 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:52.306 Initialization complete. Launching workers. 
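A sanity check worth knowing for these summary pairs: every I/O the workload issued should be accounted for exactly once on the abort side, so 'abort submitted' plus 'failed to submit' equals 'I/O completed' plus 'failed'. It holds for every pass in this log (94999 + 0 = 94999 + 0 at qd=4; 38886 + 115880 = 154766 + 0 at qd=24; the qd=64 pass below gives 36610 + 109666 = 146276 + 0). A throwaway awk check, assuming one summary line per record, so it is illustrative only against this flattened log, which packs several records per physical line:

    awk '
        /I\/O completed:/ { gsub(/,/, ""); io = $(NF - 2) + $NF }
        /abort submitted/ { gsub(/,/, ""); ab = $(NF - 4) + $NF
                            print (io == ab ? "OK" : "MISMATCH"), io, ab }
    ' build.log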
00:33:52.306 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146276, failed: 0 00:33:52.306 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36610, failed to submit 109666 00:33:52.306 success 0, unsuccessful 36610, failed 0 00:33:52.306 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:52.306 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:52.306 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:33:52.306 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:52.306 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:52.306 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:52.306 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:52.306 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:52.306 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:52.307 18:10:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:54.215 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:54.215 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:54.215 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:54.215 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:54.215 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:54.215 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:54.215 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:54.215 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:54.215 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:54.215 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:54.215 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:54.215 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:54.215 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:54.215 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:54.215 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:54.215 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:56.121 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:56.381 00:33:56.381 real 0m17.519s 00:33:56.381 user 0m8.760s 00:33:56.381 sys 0m4.411s 00:33:56.381 18:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:56.381 18:10:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:56.381 ************************************ 00:33:56.381 END TEST kernel_target_abort 00:33:56.381 ************************************ 00:33:56.381 18:10:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:56.381 18:10:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:56.381 18:10:44 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:56.381 18:10:44 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:33:56.381 18:10:44 nvmf_abort_qd_sizes -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:56.381 18:10:44 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:33:56.381 18:10:44 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:56.381 18:10:44 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:56.381 rmmod nvme_tcp 00:33:56.381 rmmod nvme_fabrics 00:33:56.381 rmmod nvme_keyring 00:33:56.381 18:10:44 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:56.381 18:10:44 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:33:56.381 18:10:44 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:33:56.381 18:10:44 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3355586 ']' 00:33:56.381 18:10:44 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3355586 00:33:56.381 18:10:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3355586 ']' 00:33:56.381 18:10:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3355586 00:33:56.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3355586) - No such process 00:33:56.381 18:10:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3355586 is not found' 00:33:56.381 Process with pid 3355586 is not found 00:33:56.381 18:10:44 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:33:56.381 18:10:44 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:58.909 Waiting for block devices as requested 00:33:58.909 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:58.909 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:58.909 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:58.909 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:58.909 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:58.909 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:58.909 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:58.909 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:58.909 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:59.168 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:59.168 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:59.426 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:59.426 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:59.426 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:59.426 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:59.426 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:59.683 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:59.943 18:10:47 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:59.943 18:10:47 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:59.943 18:10:47 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:33:59.943 18:10:47 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:59.943 18:10:47 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:33:59.943 18:10:47 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:33:59.943 18:10:47 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:59.943 18:10:47 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:59.943 18:10:47 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:59.943 18:10:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 
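The killprocess helper, traced in full earlier in this section and returning quietly here once the pid is already gone, follows a defensive pattern: confirm the pid is alive with kill -0, check the process name before killing, then kill and reap. A reconstruction from the traced steps; the sudo branch never triggers in this run, so its real behavior is a guess and is treated as a bail-out here:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2> /dev/null || {
            echo "Process with pid $pid is not found"    # the path taken in this log
            return 1
        }
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1       # refuse to kill the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }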
00:33:59.943 18:10:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:01.846 18:10:49 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:01.846 00:34:01.846 real 0m43.870s 00:34:01.846 user 0m59.394s 00:34:01.846 sys 0m14.129s 00:34:01.846 18:10:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:01.846 18:10:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:01.846 ************************************ 00:34:01.846 END TEST nvmf_abort_qd_sizes 00:34:01.846 ************************************ 00:34:01.846 18:10:49 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:01.846 18:10:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:01.846 18:10:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:01.846 18:10:49 -- common/autotest_common.sh@10 -- # set +x 00:34:02.106 ************************************ 00:34:02.106 START TEST keyring_file 00:34:02.106 ************************************ 00:34:02.106 18:10:49 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:02.106 * Looking for test storage... 00:34:02.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:02.106 18:10:49 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:02.106 18:10:49 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:34:02.106 18:10:49 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:02.106 18:10:49 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@345 -- # : 1 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@353 -- # local d=1 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@355 -- # echo 1 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@353 -- # local d=2 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@355 -- # echo 2 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:02.106 18:10:49 keyring_file -- scripts/common.sh@368 -- # return 0 00:34:02.106 18:10:49 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:02.106 18:10:49 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:02.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.106 --rc genhtml_branch_coverage=1 00:34:02.106 --rc genhtml_function_coverage=1 00:34:02.107 --rc genhtml_legend=1 00:34:02.107 --rc geninfo_all_blocks=1 00:34:02.107 --rc geninfo_unexecuted_blocks=1 00:34:02.107 00:34:02.107 ' 00:34:02.107 18:10:49 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:02.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.107 --rc genhtml_branch_coverage=1 00:34:02.107 --rc genhtml_function_coverage=1 00:34:02.107 --rc genhtml_legend=1 00:34:02.107 --rc geninfo_all_blocks=1 00:34:02.107 --rc geninfo_unexecuted_blocks=1 00:34:02.107 00:34:02.107 ' 00:34:02.107 18:10:49 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:02.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.107 --rc genhtml_branch_coverage=1 00:34:02.107 --rc genhtml_function_coverage=1 00:34:02.107 --rc genhtml_legend=1 00:34:02.107 --rc geninfo_all_blocks=1 00:34:02.107 --rc geninfo_unexecuted_blocks=1 00:34:02.107 00:34:02.107 ' 00:34:02.107 18:10:49 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:02.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:02.107 --rc genhtml_branch_coverage=1 00:34:02.107 --rc genhtml_function_coverage=1 00:34:02.107 --rc genhtml_legend=1 00:34:02.107 --rc geninfo_all_blocks=1 00:34:02.107 --rc geninfo_unexecuted_blocks=1 00:34:02.107 00:34:02.107 ' 00:34:02.107 18:10:49 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:02.107 
18:10:49 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:02.107 18:10:49 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:34:02.107 18:10:49 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:02.107 18:10:49 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:02.107 18:10:49 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:02.107 18:10:49 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.107 18:10:49 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.107 18:10:49 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.107 18:10:49 keyring_file -- paths/export.sh@5 -- # export PATH 00:34:02.107 18:10:49 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@51 -- # : 0 
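Up front, nvmf/common.sh mints a host identity: nvme gen-hostnqn returns a UUID-based NQN, the bare UUID becomes the host ID, and both are packed into the NVME_HOST array that later initiator-side calls splice in (the nvme discover invocation earlier in this log shows them expanded). A sketch of that pattern; the UUID extraction step is an assumption, while the nvme-cli flags are standard:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the UUID portion (assumed)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # The same identity is then reused on every call, e.g.:
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.1 -s 4420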
00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:02.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:02.107 18:10:49 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:02.107 18:10:49 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:02.107 18:10:49 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:34:02.107 18:10:49 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:34:02.107 18:10:49 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:34:02.107 18:10:49 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Q651gHfOye 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Q651gHfOye 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Q651gHfOye 00:34:02.107 18:10:49 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Q651gHfOye 00:34:02.107 18:10:49 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@17 -- # name=key1 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@18 -- 
# path=/tmp/tmp.u46sfMmiPY 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:02.107 18:10:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.u46sfMmiPY 00:34:02.107 18:10:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.u46sfMmiPY 00:34:02.107 18:10:49 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.u46sfMmiPY 00:34:02.107 18:10:49 keyring_file -- keyring/file.sh@30 -- # tgtpid=3366105 00:34:02.107 18:10:49 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3366105 00:34:02.107 18:10:49 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3366105 ']' 00:34:02.107 18:10:49 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:02.107 18:10:49 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:02.107 18:10:49 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:02.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:02.107 18:10:49 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:02.107 18:10:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:02.107 18:10:49 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:02.368 [2024-12-06 18:10:49.949069] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
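prep_key, traced above for both keys, converts a raw hex key into the NVMe TLS PSK interchange format via an inline 'python -' and stores it in a mode-0600 temp file. The Python body and redirect targets are not shown in the xtrace, so the following is a hypothetical reconstruction based on the interchange format (version tag, two-hex-digit hash id, base64 of the key bytes followed by their CRC32), not SPDK's verbatim helper:

    key=00112233445566778899aabbccddeeff     # key0 from this run; digest 0 = no hash
    path=$(mktemp)
    # CRC32 of the key, little-endian, is appended before base64 encoding (assumed detail)
    python3 -c 'import base64,sys,zlib; k=bytes.fromhex(sys.argv[1]); c=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:00:"+base64.b64encode(k+c).decode()+":")' "$key" > "$path"
    chmod 0600 "$path"   # the keyring rejects key files readable by group/other, as seen below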
00:34:02.368 [2024-12-06 18:10:49.949130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3366105 ] 00:34:02.368 [2024-12-06 18:10:50.019468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:02.368 [2024-12-06 18:10:50.059453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.938 18:10:50 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:02.938 18:10:50 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:02.938 18:10:50 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:34:02.938 18:10:50 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.938 18:10:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:02.938 [2024-12-06 18:10:50.729242] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:02.938 null0 00:34:02.938 [2024-12-06 18:10:50.761291] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:02.938 [2024-12-06 18:10:50.761520] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.198 18:10:50 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:03.198 [2024-12-06 18:10:50.789349] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:34:03.198 request: 00:34:03.198 { 00:34:03.198 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:34:03.198 "secure_channel": false, 00:34:03.198 "listen_address": { 00:34:03.198 "trtype": "tcp", 00:34:03.198 "traddr": "127.0.0.1", 00:34:03.198 "trsvcid": "4420" 00:34:03.198 }, 00:34:03.198 "method": "nvmf_subsystem_add_listener", 00:34:03.198 "req_id": 1 00:34:03.198 } 00:34:03.198 Got JSON-RPC error response 00:34:03.198 response: 00:34:03.198 { 00:34:03.198 "code": -32602, 00:34:03.198 "message": "Invalid parameters" 00:34:03.198 } 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:03.198 18:10:50 keyring_file -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:03.198 18:10:50 keyring_file -- keyring/file.sh@47 -- # bperfpid=3366437 00:34:03.198 18:10:50 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3366437 /var/tmp/bperf.sock 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3366437 ']' 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:03.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:03.198 18:10:50 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:03.198 18:10:50 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:03.198 [2024-12-06 18:10:50.828806] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:34:03.198 [2024-12-06 18:10:50.828854] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3366437 ] 00:34:03.198 [2024-12-06 18:10:50.906039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.198 [2024-12-06 18:10:50.942856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:04.138 18:10:51 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:04.138 18:10:51 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:04.138 18:10:51 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q651gHfOye 00:34:04.138 18:10:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Q651gHfOye 00:34:04.138 18:10:51 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.u46sfMmiPY 00:34:04.138 18:10:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.u46sfMmiPY 00:34:04.138 18:10:51 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:34:04.138 18:10:51 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:34:04.138 18:10:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:04.138 18:10:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:04.138 18:10:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:04.397 18:10:52 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Q651gHfOye == \/\t\m\p\/\t\m\p\.\Q\6\5\1\g\H\f\O\y\e ]] 00:34:04.397 18:10:52 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:34:04.397 18:10:52 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:34:04.397 18:10:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:04.397 18:10:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name 
== "key1")' 00:34:04.397 18:10:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:04.658 18:10:52 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.u46sfMmiPY == \/\t\m\p\/\t\m\p\.\u\4\6\s\f\M\m\i\P\Y ]] 00:34:04.658 18:10:52 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:34:04.658 18:10:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:04.658 18:10:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:04.658 18:10:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:04.658 18:10:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:04.658 18:10:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:04.658 18:10:52 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:34:04.658 18:10:52 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:34:04.658 18:10:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:04.658 18:10:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:04.658 18:10:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:04.658 18:10:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:04.658 18:10:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:04.918 18:10:52 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:34:04.918 18:10:52 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:04.918 18:10:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:04.918 [2024-12-06 18:10:52.735087] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:05.178 nvme0n1 00:34:05.178 18:10:52 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:34:05.178 18:10:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:05.178 18:10:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:05.178 18:10:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:05.178 18:10:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:05.178 18:10:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:05.178 18:10:52 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:34:05.178 18:10:52 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:34:05.178 18:10:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:05.178 18:10:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:05.178 18:10:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:05.178 18:10:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:05.178 18:10:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:34:05.437 18:10:53 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:34:05.437 18:10:53 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:05.437 Running I/O for 1 seconds... 00:34:06.818 21486.00 IOPS, 83.93 MiB/s 00:34:06.818 Latency(us) 00:34:06.818 [2024-12-06T17:10:54.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:06.818 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:34:06.818 nvme0n1 : 1.00 21535.41 84.12 0.00 0.00 5933.70 2157.23 16711.68 00:34:06.818 [2024-12-06T17:10:54.645Z] =================================================================================================================== 00:34:06.818 [2024-12-06T17:10:54.645Z] Total : 21535.41 84.12 0.00 0.00 5933.70 2157.23 16711.68 00:34:06.818 { 00:34:06.818 "results": [ 00:34:06.818 { 00:34:06.818 "job": "nvme0n1", 00:34:06.818 "core_mask": "0x2", 00:34:06.818 "workload": "randrw", 00:34:06.818 "percentage": 50, 00:34:06.818 "status": "finished", 00:34:06.818 "queue_depth": 128, 00:34:06.818 "io_size": 4096, 00:34:06.818 "runtime": 1.003696, 00:34:06.818 "iops": 21535.405142592976, 00:34:06.818 "mibps": 84.12267633825381, 00:34:06.818 "io_failed": 0, 00:34:06.818 "io_timeout": 0, 00:34:06.818 "avg_latency_us": 5933.698556249518, 00:34:06.818 "min_latency_us": 2157.2266666666665, 00:34:06.818 "max_latency_us": 16711.68 00:34:06.818 } 00:34:06.818 ], 00:34:06.818 "core_count": 1 00:34:06.818 } 00:34:06.818 18:10:54 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:06.818 18:10:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:06.818 18:10:54 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:34:06.818 18:10:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:06.818 18:10:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:06.818 18:10:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:06.818 18:10:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:06.818 18:10:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:06.818 18:10:54 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:34:06.818 18:10:54 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:34:06.818 18:10:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:06.818 18:10:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:06.818 18:10:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:06.818 18:10:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:06.818 18:10:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:07.078 18:10:54 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:34:07.078 18:10:54 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:07.078 18:10:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 
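All of the refcnt assertions in this section route through the same small helpers visible in the trace: bperf_cmd forwards an RPC to the bdevperf control socket, get_key filters keyring_get_keys output with jq, and get_refcnt extracts one field. Attaching a controller with --psk bumps the named key's refcount, and detaching drops it, which is what the (( 2 == 2 )) and (( 1 == 1 )) checks around the detach verify. A sketch using the socket path and helper names from this run:

    bperf_cmd() {
        ./scripts/rpc.py -s /var/tmp/bperf.sock "$@"
    }

    get_key() {    # full JSON object for one named key
        bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"
    }

    get_refcnt() {
        get_key "$1" | jq -r .refcnt
    }

    # e.g. the post-detach assertion above amounts to:
    (( $(get_refcnt key0) == 1 ))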
00:34:07.078 18:10:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:07.078 18:10:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:07.078 18:10:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:07.078 18:10:54 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:07.078 18:10:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:07.078 18:10:54 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:07.078 18:10:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:07.078 [2024-12-06 18:10:54.902706] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:07.078 [2024-12-06 18:10:54.902832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c78630 (107): Transport endpoint is not connected 00:34:07.078 [2024-12-06 18:10:54.903828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c78630 (9): Bad file descriptor 00:34:07.078 [2024-12-06 18:10:54.904830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:07.078 [2024-12-06 18:10:54.904838] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:07.078 [2024-12-06 18:10:54.904844] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:07.078 [2024-12-06 18:10:54.904850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:34:07.339 request: 00:34:07.339 { 00:34:07.339 "name": "nvme0", 00:34:07.339 "trtype": "tcp", 00:34:07.339 "traddr": "127.0.0.1", 00:34:07.339 "adrfam": "ipv4", 00:34:07.339 "trsvcid": "4420", 00:34:07.339 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:07.339 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:07.339 "prchk_reftag": false, 00:34:07.339 "prchk_guard": false, 00:34:07.339 "hdgst": false, 00:34:07.339 "ddgst": false, 00:34:07.339 "psk": "key1", 00:34:07.339 "allow_unrecognized_csi": false, 00:34:07.339 "method": "bdev_nvme_attach_controller", 00:34:07.339 "req_id": 1 00:34:07.339 } 00:34:07.339 Got JSON-RPC error response 00:34:07.339 response: 00:34:07.339 { 00:34:07.339 "code": -5, 00:34:07.339 "message": "Input/output error" 00:34:07.339 } 00:34:07.339 18:10:54 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:07.339 18:10:54 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:07.339 18:10:54 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:07.339 18:10:54 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:07.339 18:10:54 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:34:07.339 18:10:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:07.339 18:10:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:07.339 18:10:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:07.339 18:10:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:07.339 18:10:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:07.339 18:10:55 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:34:07.339 18:10:55 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:34:07.339 18:10:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:07.339 18:10:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:07.339 18:10:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:07.339 18:10:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:07.339 18:10:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:07.601 18:10:55 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:34:07.601 18:10:55 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:34:07.601 18:10:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:07.601 18:10:55 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:34:07.601 18:10:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:34:07.919 18:10:55 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:34:07.919 18:10:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:07.919 18:10:55 keyring_file -- keyring/file.sh@78 -- # jq length 00:34:08.252 18:10:55 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:34:08.252 18:10:55 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.Q651gHfOye 00:34:08.252 18:10:55 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q651gHfOye 00:34:08.252 18:10:55 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:08.252 18:10:55 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q651gHfOye 00:34:08.252 18:10:55 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:08.252 18:10:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:08.252 18:10:55 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:08.252 18:10:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:08.252 18:10:55 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q651gHfOye 00:34:08.252 18:10:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Q651gHfOye 00:34:08.252 [2024-12-06 18:10:55.868384] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Q651gHfOye': 0100660 00:34:08.252 [2024-12-06 18:10:55.868402] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:34:08.252 request: 00:34:08.252 { 00:34:08.252 "name": "key0", 00:34:08.252 "path": "/tmp/tmp.Q651gHfOye", 00:34:08.252 "method": "keyring_file_add_key", 00:34:08.252 "req_id": 1 00:34:08.252 } 00:34:08.252 Got JSON-RPC error response 00:34:08.252 response: 00:34:08.252 { 00:34:08.252 "code": -1, 00:34:08.252 "message": "Operation not permitted" 00:34:08.252 } 00:34:08.252 18:10:55 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:08.252 18:10:55 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:08.252 18:10:55 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:08.252 18:10:55 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:08.252 18:10:55 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.Q651gHfOye 00:34:08.252 18:10:55 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Q651gHfOye 00:34:08.252 18:10:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Q651gHfOye 00:34:08.252 18:10:56 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.Q651gHfOye 00:34:08.252 18:10:56 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:34:08.252 18:10:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:08.252 18:10:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:08.252 18:10:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:08.252 18:10:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:08.252 18:10:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:08.552 18:10:56 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:34:08.552 18:10:56 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:08.552 18:10:56 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:34:08.552 18:10:56 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:08.552 18:10:56 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:08.552 18:10:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:08.552 18:10:56 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:08.552 18:10:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:08.552 18:10:56 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:08.552 18:10:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:08.552 [2024-12-06 18:10:56.345605] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Q651gHfOye': No such file or directory 00:34:08.552 [2024-12-06 18:10:56.345619] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:34:08.552 [2024-12-06 18:10:56.345632] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:34:08.552 [2024-12-06 18:10:56.345638] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:34:08.552 [2024-12-06 18:10:56.345644] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:08.552 [2024-12-06 18:10:56.345649] bdev_nvme.c:6795:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:34:08.552 request: 00:34:08.552 { 00:34:08.552 "name": "nvme0", 00:34:08.552 "trtype": "tcp", 00:34:08.552 "traddr": "127.0.0.1", 00:34:08.552 "adrfam": "ipv4", 00:34:08.552 "trsvcid": "4420", 00:34:08.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:08.552 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:08.552 "prchk_reftag": false, 00:34:08.552 "prchk_guard": false, 00:34:08.552 "hdgst": false, 00:34:08.552 "ddgst": false, 00:34:08.552 "psk": "key0", 00:34:08.552 "allow_unrecognized_csi": false, 00:34:08.552 "method": "bdev_nvme_attach_controller", 00:34:08.552 "req_id": 1 00:34:08.552 } 00:34:08.552 Got JSON-RPC error response 00:34:08.552 response: 00:34:08.552 { 00:34:08.552 "code": -19, 00:34:08.552 "message": "No such device" 00:34:08.552 } 00:34:08.552 18:10:56 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:34:08.552 18:10:56 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:08.552 18:10:56 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:08.552 18:10:56 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:08.552 18:10:56 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:34:08.552 18:10:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:08.812 18:10:56 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:08.812 18:10:56 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:34:08.812 18:10:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:08.812 18:10:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:08.812 18:10:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:08.812 18:10:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:08.812 18:10:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8X8EebreqD 00:34:08.812 18:10:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:08.812 18:10:56 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:08.812 18:10:56 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:08.812 18:10:56 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:08.812 18:10:56 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:08.812 18:10:56 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:08.812 18:10:56 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:08.812 18:10:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8X8EebreqD 00:34:08.812 18:10:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8X8EebreqD 00:34:08.812 18:10:56 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.8X8EebreqD 00:34:08.813 18:10:56 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8X8EebreqD 00:34:08.813 18:10:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8X8EebreqD 00:34:09.071 18:10:56 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:09.071 18:10:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:09.330 nvme0n1 00:34:09.330 18:10:56 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:34:09.330 18:10:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:09.330 18:10:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:09.330 18:10:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:09.330 18:10:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:09.330 18:10:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:09.330 18:10:57 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:34:09.330 18:10:57 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:34:09.330 18:10:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:09.589 18:10:57 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:34:09.589 18:10:57 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:34:09.589 18:10:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:09.589 18:10:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:09.589 18:10:57 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:09.849 18:10:57 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:34:09.849 18:10:57 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:34:09.849 18:10:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:09.849 18:10:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:09.849 18:10:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:09.849 18:10:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:09.849 18:10:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:09.849 18:10:57 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:34:09.849 18:10:57 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:09.849 18:10:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:10.108 18:10:57 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:34:10.108 18:10:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:10.108 18:10:57 keyring_file -- keyring/file.sh@105 -- # jq length 00:34:10.108 18:10:57 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:34:10.108 18:10:57 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8X8EebreqD 00:34:10.108 18:10:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8X8EebreqD 00:34:10.368 18:10:58 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.u46sfMmiPY 00:34:10.368 18:10:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.u46sfMmiPY 00:34:10.627 18:10:58 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:10.627 18:10:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:10.627 nvme0n1 00:34:10.888 18:10:58 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:34:10.888 18:10:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:34:10.888 18:10:58 keyring_file -- keyring/file.sh@113 -- # config='{ 00:34:10.888 "subsystems": [ 00:34:10.888 { 00:34:10.888 "subsystem": "keyring", 00:34:10.888 "config": [ 00:34:10.888 { 00:34:10.888 "method": "keyring_file_add_key", 00:34:10.888 "params": { 00:34:10.888 "name": "key0", 00:34:10.888 "path": "/tmp/tmp.8X8EebreqD" 00:34:10.888 } 00:34:10.888 }, 00:34:10.888 { 00:34:10.888 "method": "keyring_file_add_key", 00:34:10.888 "params": { 00:34:10.888 "name": "key1", 00:34:10.888 "path": "/tmp/tmp.u46sfMmiPY" 00:34:10.888 } 00:34:10.888 } 00:34:10.888 ] 00:34:10.888 
}, 00:34:10.888 { 00:34:10.888 "subsystem": "iobuf", 00:34:10.888 "config": [ 00:34:10.888 { 00:34:10.888 "method": "iobuf_set_options", 00:34:10.888 "params": { 00:34:10.888 "small_pool_count": 8192, 00:34:10.888 "large_pool_count": 1024, 00:34:10.888 "small_bufsize": 8192, 00:34:10.888 "large_bufsize": 135168, 00:34:10.888 "enable_numa": false 00:34:10.888 } 00:34:10.888 } 00:34:10.888 ] 00:34:10.888 }, 00:34:10.888 { 00:34:10.888 "subsystem": "sock", 00:34:10.888 "config": [ 00:34:10.888 { 00:34:10.888 "method": "sock_set_default_impl", 00:34:10.888 "params": { 00:34:10.888 "impl_name": "posix" 00:34:10.888 } 00:34:10.888 }, 00:34:10.888 { 00:34:10.888 "method": "sock_impl_set_options", 00:34:10.888 "params": { 00:34:10.888 "impl_name": "ssl", 00:34:10.888 "recv_buf_size": 4096, 00:34:10.888 "send_buf_size": 4096, 00:34:10.888 "enable_recv_pipe": true, 00:34:10.888 "enable_quickack": false, 00:34:10.888 "enable_placement_id": 0, 00:34:10.888 "enable_zerocopy_send_server": true, 00:34:10.888 "enable_zerocopy_send_client": false, 00:34:10.888 "zerocopy_threshold": 0, 00:34:10.888 "tls_version": 0, 00:34:10.888 "enable_ktls": false 00:34:10.888 } 00:34:10.888 }, 00:34:10.888 { 00:34:10.888 "method": "sock_impl_set_options", 00:34:10.888 "params": { 00:34:10.888 "impl_name": "posix", 00:34:10.888 "recv_buf_size": 2097152, 00:34:10.888 "send_buf_size": 2097152, 00:34:10.888 "enable_recv_pipe": true, 00:34:10.888 "enable_quickack": false, 00:34:10.888 "enable_placement_id": 0, 00:34:10.888 "enable_zerocopy_send_server": true, 00:34:10.888 "enable_zerocopy_send_client": false, 00:34:10.888 "zerocopy_threshold": 0, 00:34:10.888 "tls_version": 0, 00:34:10.888 "enable_ktls": false 00:34:10.888 } 00:34:10.888 } 00:34:10.888 ] 00:34:10.888 }, 00:34:10.888 { 00:34:10.888 "subsystem": "vmd", 00:34:10.888 "config": [] 00:34:10.888 }, 00:34:10.888 { 00:34:10.888 "subsystem": "accel", 00:34:10.888 "config": [ 00:34:10.888 { 00:34:10.888 "method": "accel_set_options", 00:34:10.888 "params": { 00:34:10.888 "small_cache_size": 128, 00:34:10.888 "large_cache_size": 16, 00:34:10.888 "task_count": 2048, 00:34:10.888 "sequence_count": 2048, 00:34:10.888 "buf_count": 2048 00:34:10.888 } 00:34:10.888 } 00:34:10.888 ] 00:34:10.888 }, 00:34:10.888 { 00:34:10.888 "subsystem": "bdev", 00:34:10.888 "config": [ 00:34:10.888 { 00:34:10.888 "method": "bdev_set_options", 00:34:10.888 "params": { 00:34:10.888 "bdev_io_pool_size": 65535, 00:34:10.888 "bdev_io_cache_size": 256, 00:34:10.888 "bdev_auto_examine": true, 00:34:10.888 "iobuf_small_cache_size": 128, 00:34:10.888 "iobuf_large_cache_size": 16 00:34:10.888 } 00:34:10.888 }, 00:34:10.888 { 00:34:10.888 "method": "bdev_raid_set_options", 00:34:10.888 "params": { 00:34:10.888 "process_window_size_kb": 1024, 00:34:10.888 "process_max_bandwidth_mb_sec": 0 00:34:10.888 } 00:34:10.888 }, 00:34:10.888 { 00:34:10.888 "method": "bdev_iscsi_set_options", 00:34:10.888 "params": { 00:34:10.888 "timeout_sec": 30 00:34:10.888 } 00:34:10.888 }, 00:34:10.888 { 00:34:10.888 "method": "bdev_nvme_set_options", 00:34:10.888 "params": { 00:34:10.888 "action_on_timeout": "none", 00:34:10.888 "timeout_us": 0, 00:34:10.888 "timeout_admin_us": 0, 00:34:10.888 "keep_alive_timeout_ms": 10000, 00:34:10.888 "arbitration_burst": 0, 00:34:10.888 "low_priority_weight": 0, 00:34:10.888 "medium_priority_weight": 0, 00:34:10.888 "high_priority_weight": 0, 00:34:10.888 "nvme_adminq_poll_period_us": 10000, 00:34:10.888 "nvme_ioq_poll_period_us": 0, 00:34:10.888 "io_queue_requests": 512, 00:34:10.888 
"delay_cmd_submit": true, 00:34:10.888 "transport_retry_count": 4, 00:34:10.888 "bdev_retry_count": 3, 00:34:10.888 "transport_ack_timeout": 0, 00:34:10.888 "ctrlr_loss_timeout_sec": 0, 00:34:10.888 "reconnect_delay_sec": 0, 00:34:10.888 "fast_io_fail_timeout_sec": 0, 00:34:10.888 "disable_auto_failback": false, 00:34:10.888 "generate_uuids": false, 00:34:10.888 "transport_tos": 0, 00:34:10.888 "nvme_error_stat": false, 00:34:10.888 "rdma_srq_size": 0, 00:34:10.888 "io_path_stat": false, 00:34:10.888 "allow_accel_sequence": false, 00:34:10.888 "rdma_max_cq_size": 0, 00:34:10.888 "rdma_cm_event_timeout_ms": 0, 00:34:10.888 "dhchap_digests": [ 00:34:10.888 "sha256", 00:34:10.888 "sha384", 00:34:10.888 "sha512" 00:34:10.888 ], 00:34:10.888 "dhchap_dhgroups": [ 00:34:10.888 "null", 00:34:10.888 "ffdhe2048", 00:34:10.888 "ffdhe3072", 00:34:10.888 "ffdhe4096", 00:34:10.888 "ffdhe6144", 00:34:10.888 "ffdhe8192" 00:34:10.888 ], 00:34:10.888 "rdma_umr_per_io": false 00:34:10.888 } 00:34:10.888 }, 00:34:10.888 { 00:34:10.889 "method": "bdev_nvme_attach_controller", 00:34:10.889 "params": { 00:34:10.889 "name": "nvme0", 00:34:10.889 "trtype": "TCP", 00:34:10.889 "adrfam": "IPv4", 00:34:10.889 "traddr": "127.0.0.1", 00:34:10.889 "trsvcid": "4420", 00:34:10.889 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:10.889 "prchk_reftag": false, 00:34:10.889 "prchk_guard": false, 00:34:10.889 "ctrlr_loss_timeout_sec": 0, 00:34:10.889 "reconnect_delay_sec": 0, 00:34:10.889 "fast_io_fail_timeout_sec": 0, 00:34:10.889 "psk": "key0", 00:34:10.889 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:10.889 "hdgst": false, 00:34:10.889 "ddgst": false, 00:34:10.889 "multipath": "multipath" 00:34:10.889 } 00:34:10.889 }, 00:34:10.889 { 00:34:10.889 "method": "bdev_nvme_set_hotplug", 00:34:10.889 "params": { 00:34:10.889 "period_us": 100000, 00:34:10.889 "enable": false 00:34:10.889 } 00:34:10.889 }, 00:34:10.889 { 00:34:10.889 "method": "bdev_wait_for_examine" 00:34:10.889 } 00:34:10.889 ] 00:34:10.889 }, 00:34:10.889 { 00:34:10.889 "subsystem": "nbd", 00:34:10.889 "config": [] 00:34:10.889 } 00:34:10.889 ] 00:34:10.889 }' 00:34:10.889 18:10:58 keyring_file -- keyring/file.sh@115 -- # killprocess 3366437 00:34:10.889 18:10:58 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3366437 ']' 00:34:10.889 18:10:58 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3366437 00:34:10.889 18:10:58 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:10.889 18:10:58 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:10.889 18:10:58 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3366437 00:34:11.149 18:10:58 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:11.149 18:10:58 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:11.149 18:10:58 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3366437' 00:34:11.149 killing process with pid 3366437 00:34:11.149 18:10:58 keyring_file -- common/autotest_common.sh@973 -- # kill 3366437 00:34:11.149 Received shutdown signal, test time was about 1.000000 seconds 00:34:11.149 00:34:11.149 Latency(us) 00:34:11.149 [2024-12-06T17:10:58.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:11.149 [2024-12-06T17:10:58.976Z] =================================================================================================================== 00:34:11.149 [2024-12-06T17:10:58.976Z] Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:34:11.149 18:10:58 keyring_file -- common/autotest_common.sh@978 -- # wait 3366437 00:34:11.149 18:10:58 keyring_file -- keyring/file.sh@118 -- # bperfpid=3368243 00:34:11.149 18:10:58 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3368243 /var/tmp/bperf.sock 00:34:11.149 18:10:58 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3368243 ']' 00:34:11.149 18:10:58 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:11.149 18:10:58 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.149 18:10:58 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:11.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:11.149 18:10:58 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.149 18:10:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:11.149 18:10:58 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:34:11.149 18:10:58 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:34:11.149 "subsystems": [ 00:34:11.149 { 00:34:11.149 "subsystem": "keyring", 00:34:11.149 "config": [ 00:34:11.149 { 00:34:11.149 "method": "keyring_file_add_key", 00:34:11.149 "params": { 00:34:11.149 "name": "key0", 00:34:11.149 "path": "/tmp/tmp.8X8EebreqD" 00:34:11.149 } 00:34:11.149 }, 00:34:11.149 { 00:34:11.149 "method": "keyring_file_add_key", 00:34:11.149 "params": { 00:34:11.149 "name": "key1", 00:34:11.149 "path": "/tmp/tmp.u46sfMmiPY" 00:34:11.149 } 00:34:11.149 } 00:34:11.149 ] 00:34:11.149 }, 00:34:11.149 { 00:34:11.149 "subsystem": "iobuf", 00:34:11.149 "config": [ 00:34:11.150 { 00:34:11.150 "method": "iobuf_set_options", 00:34:11.150 "params": { 00:34:11.150 "small_pool_count": 8192, 00:34:11.150 "large_pool_count": 1024, 00:34:11.150 "small_bufsize": 8192, 00:34:11.150 "large_bufsize": 135168, 00:34:11.150 "enable_numa": false 00:34:11.150 } 00:34:11.150 } 00:34:11.150 ] 00:34:11.150 }, 00:34:11.150 { 00:34:11.150 "subsystem": "sock", 00:34:11.150 "config": [ 00:34:11.150 { 00:34:11.150 "method": "sock_set_default_impl", 00:34:11.150 "params": { 00:34:11.150 "impl_name": "posix" 00:34:11.150 } 00:34:11.150 }, 00:34:11.150 { 00:34:11.150 "method": "sock_impl_set_options", 00:34:11.150 "params": { 00:34:11.150 "impl_name": "ssl", 00:34:11.150 "recv_buf_size": 4096, 00:34:11.150 "send_buf_size": 4096, 00:34:11.150 "enable_recv_pipe": true, 00:34:11.150 "enable_quickack": false, 00:34:11.150 "enable_placement_id": 0, 00:34:11.150 "enable_zerocopy_send_server": true, 00:34:11.150 "enable_zerocopy_send_client": false, 00:34:11.150 "zerocopy_threshold": 0, 00:34:11.150 "tls_version": 0, 00:34:11.150 "enable_ktls": false 00:34:11.150 } 00:34:11.150 }, 00:34:11.150 { 00:34:11.150 "method": "sock_impl_set_options", 00:34:11.150 "params": { 00:34:11.150 "impl_name": "posix", 00:34:11.150 "recv_buf_size": 2097152, 00:34:11.150 "send_buf_size": 2097152, 00:34:11.150 "enable_recv_pipe": true, 00:34:11.150 "enable_quickack": false, 00:34:11.150 "enable_placement_id": 0, 00:34:11.150 "enable_zerocopy_send_server": true, 00:34:11.150 "enable_zerocopy_send_client": false, 00:34:11.150 "zerocopy_threshold": 0, 00:34:11.150 "tls_version": 0, 00:34:11.150 "enable_ktls": false 00:34:11.150 } 00:34:11.150 } 
00:34:11.150 ] 00:34:11.150 }, 00:34:11.150 { 00:34:11.150 "subsystem": "vmd", 00:34:11.150 "config": [] 00:34:11.150 }, 00:34:11.150 { 00:34:11.150 "subsystem": "accel", 00:34:11.150 "config": [ 00:34:11.150 { 00:34:11.150 "method": "accel_set_options", 00:34:11.150 "params": { 00:34:11.150 "small_cache_size": 128, 00:34:11.150 "large_cache_size": 16, 00:34:11.150 "task_count": 2048, 00:34:11.150 "sequence_count": 2048, 00:34:11.150 "buf_count": 2048 00:34:11.150 } 00:34:11.150 } 00:34:11.150 ] 00:34:11.150 }, 00:34:11.150 { 00:34:11.150 "subsystem": "bdev", 00:34:11.150 "config": [ 00:34:11.150 { 00:34:11.150 "method": "bdev_set_options", 00:34:11.150 "params": { 00:34:11.150 "bdev_io_pool_size": 65535, 00:34:11.150 "bdev_io_cache_size": 256, 00:34:11.150 "bdev_auto_examine": true, 00:34:11.150 "iobuf_small_cache_size": 128, 00:34:11.150 "iobuf_large_cache_size": 16 00:34:11.150 } 00:34:11.150 }, 00:34:11.150 { 00:34:11.150 "method": "bdev_raid_set_options", 00:34:11.150 "params": { 00:34:11.150 "process_window_size_kb": 1024, 00:34:11.150 "process_max_bandwidth_mb_sec": 0 00:34:11.150 } 00:34:11.150 }, 00:34:11.150 { 00:34:11.150 "method": "bdev_iscsi_set_options", 00:34:11.150 "params": { 00:34:11.150 "timeout_sec": 30 00:34:11.150 } 00:34:11.150 }, 00:34:11.150 { 00:34:11.150 "method": "bdev_nvme_set_options", 00:34:11.150 "params": { 00:34:11.150 "action_on_timeout": "none", 00:34:11.150 "timeout_us": 0, 00:34:11.150 "timeout_admin_us": 0, 00:34:11.150 "keep_alive_timeout_ms": 10000, 00:34:11.150 "arbitration_burst": 0, 00:34:11.150 "low_priority_weight": 0, 00:34:11.150 "medium_priority_weight": 0, 00:34:11.150 "high_priority_weight": 0, 00:34:11.150 "nvme_adminq_poll_period_us": 10000, 00:34:11.150 "nvme_ioq_poll_period_us": 0, 00:34:11.150 "io_queue_requests": 512, 00:34:11.150 "delay_cmd_submit": true, 00:34:11.150 "transport_retry_count": 4, 00:34:11.150 "bdev_retry_count": 3, 00:34:11.150 "transport_ack_timeout": 0, 00:34:11.150 "ctrlr_loss_timeout_sec": 0, 00:34:11.150 "reconnect_delay_sec": 0, 00:34:11.150 "fast_io_fail_timeout_sec": 0, 00:34:11.150 "disable_auto_failback": false, 00:34:11.150 "generate_uuids": false, 00:34:11.150 "transport_tos": 0, 00:34:11.150 "nvme_error_stat": false, 00:34:11.150 "rdma_srq_size": 0, 00:34:11.150 "io_path_stat": false, 00:34:11.150 "allow_accel_sequence": false, 00:34:11.150 "rdma_max_cq_size": 0, 00:34:11.150 "rdma_cm_event_timeout_ms": 0, 00:34:11.150 "dhchap_digests": [ 00:34:11.150 "sha256", 00:34:11.150 "sha384", 00:34:11.150 "sha512" 00:34:11.150 ], 00:34:11.150 "dhchap_dhgroups": [ 00:34:11.150 "null", 00:34:11.150 "ffdhe2048", 00:34:11.150 "ffdhe3072", 00:34:11.150 "ffdhe4096", 00:34:11.150 "ffdhe6144", 00:34:11.150 "ffdhe8192" 00:34:11.150 ], 00:34:11.150 "rdma_umr_per_io": false 00:34:11.150 } 00:34:11.150 }, 00:34:11.150 { 00:34:11.150 "method": "bdev_nvme_attach_controller", 00:34:11.150 "params": { 00:34:11.150 "name": "nvme0", 00:34:11.150 "trtype": "TCP", 00:34:11.150 "adrfam": "IPv4", 00:34:11.150 "traddr": "127.0.0.1", 00:34:11.150 "trsvcid": "4420", 00:34:11.150 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:11.150 "prchk_reftag": false, 00:34:11.150 "prchk_guard": false, 00:34:11.150 "ctrlr_loss_timeout_sec": 0, 00:34:11.150 "reconnect_delay_sec": 0, 00:34:11.150 "fast_io_fail_timeout_sec": 0, 00:34:11.150 "psk": "key0", 00:34:11.150 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:11.150 "hdgst": false, 00:34:11.150 "ddgst": false, 00:34:11.150 "multipath": "multipath" 00:34:11.150 } 00:34:11.150 }, 00:34:11.150 { 
00:34:11.150 "method": "bdev_nvme_set_hotplug", 00:34:11.150 "params": { 00:34:11.150 "period_us": 100000, 00:34:11.150 "enable": false 00:34:11.150 } 00:34:11.150 }, 00:34:11.150 { 00:34:11.150 "method": "bdev_wait_for_examine" 00:34:11.150 } 00:34:11.150 ] 00:34:11.150 }, 00:34:11.150 { 00:34:11.150 "subsystem": "nbd", 00:34:11.150 "config": [] 00:34:11.150 } 00:34:11.150 ] 00:34:11.150 }' 00:34:11.150 [2024-12-06 18:10:58.851285] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 00:34:11.150 [2024-12-06 18:10:58.851341] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3368243 ] 00:34:11.150 [2024-12-06 18:10:58.915493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.150 [2024-12-06 18:10:58.944782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:11.410 [2024-12-06 18:10:59.089206] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:11.979 18:10:59 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:11.979 18:10:59 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:34:11.979 18:10:59 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:34:11.979 18:10:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:11.979 18:10:59 keyring_file -- keyring/file.sh@121 -- # jq length 00:34:11.979 18:10:59 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:34:11.979 18:10:59 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:34:11.979 18:10:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:11.979 18:10:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:11.979 18:10:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:11.979 18:10:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:11.979 18:10:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:12.239 18:10:59 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:34:12.239 18:10:59 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:34:12.239 18:10:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:12.239 18:10:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:12.239 18:10:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:12.239 18:10:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:12.239 18:10:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:12.499 18:11:00 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:34:12.499 18:11:00 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:34:12.499 18:11:00 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:34:12.499 18:11:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:34:12.499 18:11:00 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:34:12.499 18:11:00 keyring_file -- 
keyring/file.sh@1 -- # cleanup 00:34:12.499 18:11:00 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.8X8EebreqD /tmp/tmp.u46sfMmiPY 00:34:12.499 18:11:00 keyring_file -- keyring/file.sh@20 -- # killprocess 3368243 00:34:12.499 18:11:00 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3368243 ']' 00:34:12.499 18:11:00 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3368243 00:34:12.499 18:11:00 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:12.499 18:11:00 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:12.499 18:11:00 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3368243 00:34:12.499 18:11:00 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:12.499 18:11:00 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:12.499 18:11:00 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3368243' 00:34:12.499 killing process with pid 3368243 00:34:12.499 18:11:00 keyring_file -- common/autotest_common.sh@973 -- # kill 3368243 00:34:12.499 Received shutdown signal, test time was about 1.000000 seconds 00:34:12.499 00:34:12.499 Latency(us) 00:34:12.499 [2024-12-06T17:11:00.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:12.499 [2024-12-06T17:11:00.326Z] =================================================================================================================== 00:34:12.499 [2024-12-06T17:11:00.326Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:12.499 18:11:00 keyring_file -- common/autotest_common.sh@978 -- # wait 3368243 00:34:12.759 18:11:00 keyring_file -- keyring/file.sh@21 -- # killprocess 3366105 00:34:12.759 18:11:00 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3366105 ']' 00:34:12.759 18:11:00 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3366105 00:34:12.759 18:11:00 keyring_file -- common/autotest_common.sh@959 -- # uname 00:34:12.759 18:11:00 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:12.759 18:11:00 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3366105 00:34:12.759 18:11:00 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:12.759 18:11:00 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:12.759 18:11:00 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3366105' 00:34:12.759 killing process with pid 3366105 00:34:12.759 18:11:00 keyring_file -- common/autotest_common.sh@973 -- # kill 3366105 00:34:12.759 18:11:00 keyring_file -- common/autotest_common.sh@978 -- # wait 3366105 00:34:13.020 00:34:13.020 real 0m10.959s 00:34:13.020 user 0m26.208s 00:34:13.020 sys 0m2.137s 00:34:13.020 18:11:00 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:13.020 18:11:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:13.020 ************************************ 00:34:13.020 END TEST keyring_file 00:34:13.020 ************************************ 00:34:13.020 18:11:00 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:34:13.020 18:11:00 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:13.020 18:11:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:13.020 18:11:00 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:34:13.020 18:11:00 -- common/autotest_common.sh@10 -- # set +x 00:34:13.020 ************************************ 00:34:13.020 START TEST keyring_linux 00:34:13.020 ************************************ 00:34:13.020 18:11:00 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:13.020 Joined session keyring: 467579714 00:34:13.020 * Looking for test storage... 00:34:13.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:13.020 18:11:00 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:13.020 18:11:00 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:13.020 18:11:00 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:34:13.020 18:11:00 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@345 -- # : 1 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:13.020 18:11:00 keyring_linux -- scripts/common.sh@368 -- # return 0 00:34:13.020 18:11:00 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:13.020 18:11:00 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:13.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.020 --rc genhtml_branch_coverage=1 00:34:13.020 --rc genhtml_function_coverage=1 00:34:13.020 --rc genhtml_legend=1 00:34:13.020 --rc geninfo_all_blocks=1 00:34:13.020 --rc geninfo_unexecuted_blocks=1 00:34:13.020 00:34:13.020 ' 00:34:13.020 18:11:00 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:13.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.020 --rc genhtml_branch_coverage=1 00:34:13.020 --rc genhtml_function_coverage=1 00:34:13.020 --rc genhtml_legend=1 00:34:13.020 --rc geninfo_all_blocks=1 00:34:13.020 --rc geninfo_unexecuted_blocks=1 00:34:13.020 00:34:13.020 ' 00:34:13.020 18:11:00 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:13.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.020 --rc genhtml_branch_coverage=1 00:34:13.020 --rc genhtml_function_coverage=1 00:34:13.020 --rc genhtml_legend=1 00:34:13.020 --rc geninfo_all_blocks=1 00:34:13.020 --rc geninfo_unexecuted_blocks=1 00:34:13.020 00:34:13.020 ' 00:34:13.020 18:11:00 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:13.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.020 --rc genhtml_branch_coverage=1 00:34:13.020 --rc genhtml_function_coverage=1 00:34:13.020 --rc genhtml_legend=1 00:34:13.020 --rc geninfo_all_blocks=1 00:34:13.020 --rc geninfo_unexecuted_blocks=1 00:34:13.020 00:34:13.020 ' 00:34:13.020 18:11:00 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:13.020 18:11:00 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:13.020 18:11:00 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:34:13.020 18:11:00 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:13.020 18:11:00 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:13.020 18:11:00 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:13.020 18:11:00 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:13.020 18:11:00 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:34:13.020 18:11:00 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:13.020 18:11:00 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:13.020 18:11:00 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:13.020 18:11:00 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:13.020 18:11:00 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:13.020 18:11:00 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.021 18:11:00 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:34:13.021 18:11:00 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.021 18:11:00 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.021 18:11:00 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.021 18:11:00 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.021 18:11:00 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.021 18:11:00 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.021 18:11:00 keyring_linux -- paths/export.sh@5 -- # export PATH 00:34:13.021 18:11:00 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
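
The prep_key calls traced next build /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 by passing the raw hex strings through nvmf/common.sh's format_interchange_psk helper, which emits the NVMe-TLS PSK interchange format: a "NVMeTLSkey-1:<digest>:<base64 payload>:" string whose payload is the key bytes followed by a CRC32 trailer. A minimal Python sketch of that construction follows (an approximation, not SPDK's exact heredoc; it assumes the CRC is appended little-endian, which matches the NVMeTLSkey-1:00:MDAx...JEiQ: values printed further down):

import base64
import zlib

def format_interchange_psk(key: str, digest: int = 0) -> str:
    # The test feeds the hex string through as ASCII bytes, not decoded hex.
    raw = key.encode("ascii")
    # 4-byte little-endian CRC32 trailer, then base64 over key || CRC.
    crc = zlib.crc32(raw).to_bytes(4, "little")
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(raw + crc).decode())

# Shape of the key material prep_key writes below (digest 0 -> "00" field):
print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))
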
00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:13.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:13.021 18:11:00 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:13.021 18:11:00 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:13.021 18:11:00 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:13.021 18:11:00 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:34:13.021 18:11:00 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:34:13.021 18:11:00 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:34:13.021 18:11:00 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:34:13.021 18:11:00 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:13.021 18:11:00 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:34:13.021 18:11:00 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:13.021 18:11:00 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:13.021 18:11:00 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:34:13.021 18:11:00 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:13.021 18:11:00 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:13.281 18:11:00 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:34:13.281 18:11:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:34:13.281 /tmp/:spdk-test:key0 00:34:13.281 18:11:00 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:34:13.281 18:11:00 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:13.281 18:11:00 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:34:13.281 18:11:00 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:13.281 18:11:00 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:13.281 18:11:00 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:34:13.281 
18:11:00 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:13.281 18:11:00 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:13.281 18:11:00 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:13.281 18:11:00 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:13.281 18:11:00 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:13.281 18:11:00 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:13.281 18:11:00 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:13.281 18:11:00 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:34:13.281 18:11:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:34:13.281 /tmp/:spdk-test:key1 00:34:13.281 18:11:00 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3368678 00:34:13.281 18:11:00 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3368678 00:34:13.281 18:11:00 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3368678 ']' 00:34:13.281 18:11:00 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:13.281 18:11:00 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:13.281 18:11:00 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:13.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:13.281 18:11:00 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:13.281 18:11:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:13.281 18:11:00 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:13.281 [2024-12-06 18:11:00.937922] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
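
Once the target comes up, linux.sh seeds the kernel session keyring: each interchange PSK is loaded with "keyctl add user <name> <payload> @s", and keyctl answers with the kernel-assigned serial number (427193473 and 998627351 in the trace below). A small Python wrapper showing the same two keyctl operations the test relies on (a sketch; it assumes the keyutils CLI is installed, as on this test bed):

import subprocess

def keyctl_add_user(name: str, payload: str) -> int:
    # "keyctl add" prints the new key's serial number on stdout.
    out = subprocess.check_output(["keyctl", "add", "user", name, payload, "@s"])
    return int(out.decode().strip())

def keyctl_search_user(name: str) -> int:
    # Resolve an existing key in the session keyring to its serial number.
    out = subprocess.check_output(["keyctl", "search", "@s", "user", name])
    return int(out.decode().strip())
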
00:34:13.281 [2024-12-06 18:11:00.937980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3368678 ] 00:34:13.281 [2024-12-06 18:11:01.002207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:13.281 [2024-12-06 18:11:01.032333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:13.542 18:11:01 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:13.542 18:11:01 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:34:13.542 18:11:01 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:34:13.542 18:11:01 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.542 18:11:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:13.542 [2024-12-06 18:11:01.201575] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:13.542 null0 00:34:13.542 [2024-12-06 18:11:01.233633] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:13.542 [2024-12-06 18:11:01.233997] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:13.542 18:11:01 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.542 18:11:01 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:34:13.542 427193473 00:34:13.542 18:11:01 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:34:13.542 998627351 00:34:13.542 18:11:01 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3368798 00:34:13.542 18:11:01 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3368798 /var/tmp/bperf.sock 00:34:13.542 18:11:01 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:34:13.542 18:11:01 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3368798 ']' 00:34:13.542 18:11:01 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:13.542 18:11:01 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:13.542 18:11:01 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:13.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:13.542 18:11:01 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:13.542 18:11:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:13.542 [2024-12-06 18:11:01.291023] Starting SPDK v25.01-pre git sha1 88dfb58dc / DPDK 24.03.0 initialization... 
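
As the bdevperf instance starting here comes up, the test enables the Linux keyring backend over /var/tmp/bperf.sock, attaches the controller with --psk :spdk-test:key0, and then check_keys cross-checks SPDK against the kernel: the .sn field reported by keyring_get_keys must equal the serial that "keyctl search" finds, and "keyctl print" must return the original interchange string (the [[ 427193473 == ... ]] and NVMeTLSkey-1 comparisons in linux.sh@26-27 below). A rough Python rendering of that verification, assuming the rpc.py path and socket visible in this job's trace and the keyring_get_keys record shape seen earlier in the log:

import json
import subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
SOCK = "/var/tmp/bperf.sock"

def spdk_key_sn(name: str) -> int:
    # keyring_get_keys returns a JSON array; linux-keyring entries carry .sn.
    keys = json.loads(subprocess.check_output([RPC, "-s", SOCK, "keyring_get_keys"]))
    return int(next(k for k in keys if k["name"] == name)["sn"])

def kernel_key_sn(name: str) -> int:
    return int(subprocess.check_output(["keyctl", "search", "@s", "user", name]).decode())

sn = spdk_key_sn(":spdk-test:key0")
assert sn == kernel_key_sn(":spdk-test:key0")
# Payload round-trip, as in linux.sh@27:
payload = subprocess.check_output(["keyctl", "print", str(sn)]).decode()
assert payload.startswith("NVMeTLSkey-1:00:")
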
00:34:13.542 [2024-12-06 18:11:01.291070] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3368798 ] 00:34:13.542 [2024-12-06 18:11:01.355363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:13.802 [2024-12-06 18:11:01.385140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:13.802 18:11:01 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:13.802 18:11:01 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:34:13.802 18:11:01 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:34:13.802 18:11:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:34:13.802 18:11:01 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:34:13.802 18:11:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:14.062 18:11:01 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:14.062 18:11:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:14.322 [2024-12-06 18:11:01.924799] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:14.322 nvme0n1 00:34:14.322 18:11:02 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:34:14.322 18:11:02 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:34:14.322 18:11:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:14.322 18:11:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:14.322 18:11:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:14.322 18:11:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:14.581 18:11:02 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:34:14.581 18:11:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:14.581 18:11:02 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:34:14.581 18:11:02 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:34:14.581 18:11:02 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:14.581 18:11:02 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:34:14.581 18:11:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:14.581 18:11:02 keyring_linux -- keyring/linux.sh@25 -- # sn=427193473 00:34:14.581 18:11:02 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:34:14.581 18:11:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:14.581 18:11:02 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 427193473 == \4\2\7\1\9\3\4\7\3 ]] 00:34:14.581 18:11:02 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 427193473 00:34:14.581 18:11:02 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:34:14.581 18:11:02 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:14.842 Running I/O for 1 seconds... 00:34:15.782 24340.00 IOPS, 95.08 MiB/s 00:34:15.782 Latency(us) 00:34:15.782 [2024-12-06T17:11:03.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:15.782 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:15.782 nvme0n1 : 1.01 24341.30 95.08 0.00 0.00 5242.93 4341.76 14527.15 00:34:15.782 [2024-12-06T17:11:03.609Z] =================================================================================================================== 00:34:15.782 [2024-12-06T17:11:03.609Z] Total : 24341.30 95.08 0.00 0.00 5242.93 4341.76 14527.15 00:34:15.782 { 00:34:15.782 "results": [ 00:34:15.782 { 00:34:15.782 "job": "nvme0n1", 00:34:15.782 "core_mask": "0x2", 00:34:15.782 "workload": "randread", 00:34:15.782 "status": "finished", 00:34:15.782 "queue_depth": 128, 00:34:15.782 "io_size": 4096, 00:34:15.782 "runtime": 1.005205, 00:34:15.782 "iops": 24341.303515203366, 00:34:15.782 "mibps": 95.08321685626315, 00:34:15.782 "io_failed": 0, 00:34:15.782 "io_timeout": 0, 00:34:15.782 "avg_latency_us": 5242.931336711896, 00:34:15.782 "min_latency_us": 4341.76, 00:34:15.782 "max_latency_us": 14527.146666666667 00:34:15.782 } 00:34:15.782 ], 00:34:15.782 "core_count": 1 00:34:15.782 } 00:34:15.782 18:11:03 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:15.782 18:11:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:15.782 18:11:03 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:34:15.782 18:11:03 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:34:15.782 18:11:03 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:15.782 18:11:03 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:15.782 18:11:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:15.782 18:11:03 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:16.042 18:11:03 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:34:16.042 18:11:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:16.042 18:11:03 keyring_linux -- keyring/linux.sh@23 -- # return 00:34:16.042 18:11:03 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:16.042 18:11:03 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:34:16.042 18:11:03 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
00:34:16.042 18:11:03 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:34:16.042 18:11:03 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:16.042 18:11:03 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:34:16.042 18:11:03 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:16.042 18:11:03 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:16.042 18:11:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:16.302 [2024-12-06 18:11:03.912692] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:16.302 [2024-12-06 18:11:03.913501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c03e0 (107): Transport endpoint is not connected 00:34:16.302 [2024-12-06 18:11:03.914497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c03e0 (9): Bad file descriptor 00:34:16.302 [2024-12-06 18:11:03.915500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:16.302 [2024-12-06 18:11:03.915508] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:16.302 [2024-12-06 18:11:03.915513] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:16.302 [2024-12-06 18:11:03.915520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:34:16.302 request: 00:34:16.302 { 00:34:16.302 "name": "nvme0", 00:34:16.302 "trtype": "tcp", 00:34:16.302 "traddr": "127.0.0.1", 00:34:16.302 "adrfam": "ipv4", 00:34:16.302 "trsvcid": "4420", 00:34:16.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:16.302 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:16.302 "prchk_reftag": false, 00:34:16.302 "prchk_guard": false, 00:34:16.302 "hdgst": false, 00:34:16.302 "ddgst": false, 00:34:16.302 "psk": ":spdk-test:key1", 00:34:16.302 "allow_unrecognized_csi": false, 00:34:16.302 "method": "bdev_nvme_attach_controller", 00:34:16.302 "req_id": 1 00:34:16.302 } 00:34:16.302 Got JSON-RPC error response 00:34:16.302 response: 00:34:16.302 { 00:34:16.302 "code": -5, 00:34:16.302 "message": "Input/output error" 00:34:16.302 } 00:34:16.302 18:11:03 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:34:16.302 18:11:03 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:16.302 18:11:03 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:16.302 18:11:03 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:16.302 18:11:03 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:34:16.302 18:11:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:16.302 18:11:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:34:16.302 18:11:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:34:16.302 18:11:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:34:16.302 18:11:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:16.302 18:11:03 keyring_linux -- keyring/linux.sh@33 -- # sn=427193473 00:34:16.302 18:11:03 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 427193473 00:34:16.302 1 links removed 00:34:16.302 18:11:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:16.302 18:11:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:34:16.302 18:11:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:34:16.302 18:11:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:34:16.302 18:11:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:34:16.302 18:11:03 keyring_linux -- keyring/linux.sh@33 -- # sn=998627351 00:34:16.302 18:11:03 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 998627351 00:34:16.302 1 links removed 00:34:16.302 18:11:03 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3368798 00:34:16.302 18:11:03 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3368798 ']' 00:34:16.302 18:11:03 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3368798 00:34:16.302 18:11:03 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:34:16.302 18:11:03 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:16.302 18:11:03 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3368798 00:34:16.303 18:11:03 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:16.303 18:11:03 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:16.303 18:11:03 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3368798' 00:34:16.303 killing process with pid 3368798 00:34:16.303 18:11:03 keyring_linux -- common/autotest_common.sh@973 -- # kill 3368798 00:34:16.303 Received shutdown signal, test time was about 1.000000 seconds 00:34:16.303 00:34:16.303 
Latency(us) 00:34:16.303 [2024-12-06T17:11:04.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:16.303 [2024-12-06T17:11:04.130Z] =================================================================================================================== 00:34:16.303 [2024-12-06T17:11:04.130Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:16.303 18:11:03 keyring_linux -- common/autotest_common.sh@978 -- # wait 3368798 00:34:16.303 18:11:04 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3368678 00:34:16.303 18:11:04 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3368678 ']' 00:34:16.303 18:11:04 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3368678 00:34:16.303 18:11:04 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:34:16.303 18:11:04 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:16.303 18:11:04 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3368678 00:34:16.564 18:11:04 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:16.564 18:11:04 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:16.564 18:11:04 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3368678' 00:34:16.564 killing process with pid 3368678 00:34:16.564 18:11:04 keyring_linux -- common/autotest_common.sh@973 -- # kill 3368678 00:34:16.564 18:11:04 keyring_linux -- common/autotest_common.sh@978 -- # wait 3368678 00:34:16.564 00:34:16.564 real 0m3.622s 00:34:16.564 user 0m6.885s 00:34:16.564 sys 0m1.170s 00:34:16.564 18:11:04 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:16.564 18:11:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:16.564 ************************************ 00:34:16.564 END TEST keyring_linux 00:34:16.564 ************************************ 00:34:16.564 18:11:04 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:16.564 18:11:04 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:34:16.564 18:11:04 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:34:16.564 18:11:04 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:34:16.564 18:11:04 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:34:16.564 18:11:04 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:34:16.564 18:11:04 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:16.564 18:11:04 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:16.564 18:11:04 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:16.564 18:11:04 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:16.564 18:11:04 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:16.564 18:11:04 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:16.564 18:11:04 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:16.564 18:11:04 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:16.564 18:11:04 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:34:16.564 18:11:04 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:34:16.564 18:11:04 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:34:16.564 18:11:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:16.564 18:11:04 -- common/autotest_common.sh@10 -- # set +x 00:34:16.564 18:11:04 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:34:16.564 18:11:04 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:34:16.564 18:11:04 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:34:16.564 18:11:04 -- common/autotest_common.sh@10 -- # set +x 00:34:21.837 INFO: APP EXITING 
00:34:21.837 INFO: killing all VMs 00:34:21.837 INFO: killing vhost app 00:34:21.837 INFO: EXIT DONE 00:34:23.743 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:34:23.743 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:34:23.743 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:34:24.002 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:34:24.002 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:34:24.002 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:34:24.002 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:34:24.002 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:34:24.002 0000:65:00.0 (144d a80a): Already using the nvme driver 00:34:24.002 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:34:24.002 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:34:24.002 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:34:24.002 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:34:24.002 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:34:24.002 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:34:24.002 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:34:24.002 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:34:26.537 Cleaning 00:34:26.537 Removing: /var/run/dpdk/spdk0/config 00:34:26.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:26.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:26.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:26.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:26.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:26.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:26.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:26.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:26.537 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:26.537 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:26.537 Removing: /var/run/dpdk/spdk1/config 00:34:26.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:26.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:26.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:26.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:26.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:26.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:26.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:26.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:26.537 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:26.537 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:26.537 Removing: /var/run/dpdk/spdk2/config 00:34:26.537 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:26.537 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:26.537 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:26.537 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:26.537 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:26.538 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:26.538 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:26.538 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:26.538 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:26.538 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:26.538 Removing: /var/run/dpdk/spdk3/config 00:34:26.538 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:26.538 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:26.538 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:26.538 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:26.538 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:26.538 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:26.538 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:26.538 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:26.538 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:26.538 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:26.538 Removing: /var/run/dpdk/spdk4/config 00:34:26.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:26.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:26.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:26.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:26.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:26.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:26.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:26.798 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:26.798 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:26.798 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:26.798 Removing: /dev/shm/bdev_svc_trace.1 00:34:26.798 Removing: /dev/shm/nvmf_trace.0 00:34:26.798 Removing: /dev/shm/spdk_tgt_trace.pid2768119 00:34:26.798 Removing: /var/run/dpdk/spdk0 00:34:26.798 Removing: /var/run/dpdk/spdk1 00:34:26.798 Removing: /var/run/dpdk/spdk2 00:34:26.798 Removing: /var/run/dpdk/spdk3 00:34:26.798 Removing: /var/run/dpdk/spdk4 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2766360 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2768119 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2768688 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2770032 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2770060 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2771445 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2771451 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2771707 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2772761 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2773505 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2773896 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2774289 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2774708 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2775105 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2775360 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2775505 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2775882 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2776596 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2780172 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2780355 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2780564 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2780570 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2781017 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2781234 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2781650 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2781656 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2782014 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2782019 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2782281 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2782390 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2782903 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2783296 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2783691 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2788666 00:34:26.798 Removing: 
/var/run/dpdk/spdk_pid2794085 00:34:26.798 Removing: /var/run/dpdk/spdk_pid2807103 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2807890 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2813486 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2813864 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2819262 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2826624 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2830017 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2842954 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2854901 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2857240 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2858570 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2880240 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2885321 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2944767 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2951480 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2958991 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2967032 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2967098 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2968237 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2969330 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2970561 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2971231 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2971242 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2971572 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2971819 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2971908 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2972907 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2974306 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2975777 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2976484 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2976641 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2976942 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2978226 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2979304 00:34:26.799 Removing: /var/run/dpdk/spdk_pid2989929 00:34:26.799 Removing: /var/run/dpdk/spdk_pid3024923 00:34:26.799 Removing: /var/run/dpdk/spdk_pid3030628 00:34:26.799 Removing: /var/run/dpdk/spdk_pid3032924 00:34:26.799 Removing: /var/run/dpdk/spdk_pid3035286 00:34:26.799 Removing: /var/run/dpdk/spdk_pid3035600 00:34:26.799 Removing: /var/run/dpdk/spdk_pid3035623 00:34:26.799 Removing: /var/run/dpdk/spdk_pid3035947 00:34:26.799 Removing: /var/run/dpdk/spdk_pid3036333 00:34:26.799 Removing: /var/run/dpdk/spdk_pid3038680 00:34:26.799 Removing: /var/run/dpdk/spdk_pid3039743 00:34:26.799 Removing: /var/run/dpdk/spdk_pid3040123 00:34:26.799 Removing: /var/run/dpdk/spdk_pid3043147 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3043849 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3044559 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3049635 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3056927 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3056928 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3056929 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3061704 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3072615 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3078128 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3086216 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3087773 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3089501 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3091352 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3097381 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3102872 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3108092 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3117675 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3117696 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3123020 00:34:27.058 Removing: 
/var/run/dpdk/spdk_pid3123327 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3123597 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3124102 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3124113 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3129997 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3130661 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3136159 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3139824 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3146846 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3154220 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3164661 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3173621 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3173623 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3196897 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3197583 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3198261 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3198935 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3199875 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3200534 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3201258 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3201867 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3207174 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3207531 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3215668 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3216046 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3222851 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3228214 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3240852 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3241843 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3247230 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3247583 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3252809 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3259996 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3263390 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3276524 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3287774 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3290092 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3291217 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3311679 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3316523 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3320240 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3327830 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3327961 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3334115 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3337089 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3339811 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3341312 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3343933 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3345448 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3355670 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3356380 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3357225 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3360005 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3360666 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3361334 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3366105 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3366437 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3368243 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3368678 00:34:27.058 Removing: /var/run/dpdk/spdk_pid3368798 00:34:27.058 Clean 00:34:27.058 18:11:14 -- common/autotest_common.sh@1453 -- # return 0 00:34:27.058 18:11:14 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:34:27.058 18:11:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:27.058 18:11:14 -- common/autotest_common.sh@10 -- # set +x 00:34:27.318 18:11:14 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:34:27.318 
18:11:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:27.318 18:11:14 -- common/autotest_common.sh@10 -- # set +x 00:34:27.318 18:11:14 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:27.318 18:11:14 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:34:27.318 18:11:14 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:34:27.318 18:11:14 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:34:27.318 18:11:14 -- spdk/autotest.sh@398 -- # hostname 00:34:27.318 18:11:14 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:34:27.318 geninfo: WARNING: invalid characters removed from testname! 00:34:45.421 18:11:31 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:46.802 18:11:34 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:48.711 18:11:36 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:50.615 18:11:37 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:51.996 18:11:39 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:53.519 18:11:41 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:55.429 18:11:42 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:55.429 18:11:42 -- spdk/autorun.sh@1 -- $ timing_finish 00:34:55.429 18:11:42 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:34:55.429 18:11:42 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:55.429 18:11:42 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:55.429 18:11:42 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:55.429 + [[ -n 2685762 ]] 00:34:55.429 + sudo kill 2685762 00:34:55.439 [Pipeline] } 00:34:55.458 [Pipeline] // stage 00:34:55.462 [Pipeline] } 00:34:55.474 [Pipeline] // timeout 00:34:55.479 [Pipeline] } 00:34:55.497 [Pipeline] // catchError 00:34:55.501 [Pipeline] } 00:34:55.517 [Pipeline] // wrap 00:34:55.521 [Pipeline] } 00:34:55.534 [Pipeline] // catchError 00:34:55.543 [Pipeline] stage 00:34:55.546 [Pipeline] { (Epilogue) 00:34:55.559 [Pipeline] catchError 00:34:55.561 [Pipeline] { 00:34:55.574 [Pipeline] echo 00:34:55.575 Cleanup processes 00:34:55.579 [Pipeline] sh 00:34:55.859 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:55.859 3380772 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:55.876 [Pipeline] sh 00:34:56.164 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:56.164 ++ grep -v 'sudo pgrep' 00:34:56.164 ++ awk '{print $1}' 00:34:56.164 + sudo kill -9 00:34:56.164 + true 00:34:56.176 [Pipeline] sh 00:34:56.461 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:06.459 [Pipeline] sh 00:35:06.774 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:06.774 Artifacts sizes are good 00:35:06.791 [Pipeline] archiveArtifacts 00:35:06.800 Archiving artifacts 00:35:06.921 [Pipeline] sh 00:35:07.205 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:35:07.216 [Pipeline] cleanWs 00:35:07.223 [WS-CLEANUP] Deleting project workspace... 00:35:07.223 [WS-CLEANUP] Deferred wipeout is used... 00:35:07.228 [WS-CLEANUP] done 00:35:07.229 [Pipeline] } 00:35:07.239 [Pipeline] // catchError 00:35:07.247 [Pipeline] sh 00:35:07.524 + logger -p user.info -t JENKINS-CI 00:35:07.532 [Pipeline] } 00:35:07.541 [Pipeline] // stage 00:35:07.545 [Pipeline] } 00:35:07.558 [Pipeline] // node 00:35:07.563 [Pipeline] End of Pipeline 00:35:07.595 Finished: SUCCESS